TY - THES A1 - Norberto Sales, Juliano Efson T1 - An Explainable Semantic Parser for End-User Development N2 - Programming is a key skill in a world where businesses are driven by digital transformations. Although much of the programming demand can be addressed by a simple set of instructions composing libraries and services available on the web, non-technical professionals, such as domain experts and analysts, are still unable to construct their own programs due to the intrinsic complexity of coding. Among other types of end-user development, natural language programming has emerged to allow users to program without the formalism of traditional programming languages: a tailored semantic parser translates a natural language utterance into a formal command representation that can be processed by a computational machine. Currently, semantic parsers are typically built on top of a learning method that defines their behaviour based on the patterns behind a large training dataset, whose production is frequently costly and time-consuming. Our research is devoted to studying and proposing a semantic parser for natural language commands targeting a scenario with low availability of training data. Our proposed semantic parser follows a multi-component architecture, composed of a specialised shallow parser that associates natural language commands with predicate-argument structures, integrated with a distributional ranking model that matches the command to a function signature available from an API knowledge base. Systems developed with statistical learning models and complex linguistic resources, such as the proposed semantic parser, do not natively provide an easy way to associate a single feature of the input data with its impact on system behaviour. In this scenario, end-user explanations for intelligent systems have become a strong requirement to increase user confidence and system literacy. Thus, our research designed an explanation model for the proposed semantic parser that fits the heterogeneity of its multi-component architecture. The explanation model explores a hierarchical representation with an increasing degree of technical depth, providing higher-level explanations in the initial layers and moving gradually to those that demand technical knowledge, while applying different explanation strategies to better express the approach behind each component. With the support of a user-centred experiment, we compared the utility of different types of explanations and the impact of background knowledge on users' preferences. Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-10718 ER - TY - THES A1 - Koçer, Volkan T1 - Three Essays on Global and Local Brands N2 - In three essays, this dissertation examines the past, present and future of branding in an international context, contributing to the research area of global/local brands, while also offering managers valuable insights for their branding strategies. The first essay provides scholars and practitioners with a detailed state of the art of global/local brand research and proposes promising angles for future research, especially considering major challenges for our societies. The second essay incorporates the segment of cosmopolitan consumers into perceived brand globalness/localness research. Theoretically grounded in the concepts of social identity theory and complexity, the essay builds on perceived brand globalness/localness to analyze how cosmopolitans arrange both their global and local orientations.
Aside from offering scholars a new theoretical lens on consumer cosmopolitanism, managers can benefit from the insights gained if cosmopolitans are a particular target group in their business strategy. The third and final essay meta-analytically investigates how perceived brand globalness and localness affect various key outcome variables. At the heart of this essay is a comparison of perceived brand globalness and localness, offering scholars and practitioners valuable empirical insights into similarities and differences between their effects on outcomes such as brand quality. KW - global brands KW - local brands KW - perceived brand globalness KW - perceived brand localness KW - international marketing Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-10722 ER - TY - THES A1 - Tomek, Raphaela T1 - Quiet please! - School Noise and its Effects on Student Teachers and Practicing Teachers N2 - Lärm in Schulen gilt aus unterschiedlichen Gründen als massiver Belastungsfaktor für Lehrkräfte und kann deshalb zu Leistungsdefiziten im Beruf, aber auch zu physischen und psychischen Beeinträchtigungen führen. Die Erforschung von Schullärm und seinen Auswirkungen ist daher essentiell. Die drei Studien, die in dieser Arbeit vorgestellt werden, untersuchten die unmittelbaren Auswirkungen von Lärm auf Lehramtsstudierende und die mittelbaren Auswirkungen auf praktizierende Lehrkräfte. In der ersten und zweiten Studie wurde im Rahmen zweier Experimente überprüft, wie sich Pausenlärm auf das Stresserleben, die Leistung in einem Konzentrationstest und auf die Fehlerkorrektur eines Diktats auswirkt. Auf Grundlage des transaktionalen Stressmodells (Lazarus & Folkman, 1984) wurde vermutet, dass Lärm zu einer Erhöhung des Stresserlebens führt. Der Maximal-Adaptability-Theorie (Hancock & Warm, 1989, 2003) nach sollte der Lärm zunächst eine optimale Leistung, langfristig jedoch eine Leistungsbeeinträchtigung verursachen. Um dies zu überprüfen, bearbeiteten in der ersten Studie 74 und in der zweiten Studie 104 Lehramtsstudierende der Universität Passau zwei unterschiedliche Konzentrationstests und korrigierten das Diktat eines Schülers, während sie kurzen, kontinuierlichen oder keinen Pausenlärm hörten. In beiden Experimenten führte kontinuierlicher Lärm zu einer Erhöhung des Stresserlebens. Weder kurzer noch kontinuierlicher Lärm führte zu einer Verschlechterung der Konzentrationsleistung. Weiter zeigten sich unterschiedliche Befunde: Im ersten Experiment führte ein kurzer Konzentrationstest in Kombination mit kontinuierlichem Lärm zu positiven Effekten in der Diktatkorrektur, d.h. die Versuchspersonen wiesen eine bessere Leistung in der Fehlerkorrektur auf. Im zweiten Experiment führte ein langer Konzentrationstest in Kombination mit kurzem oder kontinuierlichem Lärm zu negativen Effekten, d.h. die Probanden machten vergleichsweise mehr Fehler bei der anschließenden Diktatkorrektur. Daraus lässt sich schlussfolgern, dass Schullärm einerseits das Stresserleben erhöhen und andererseits die anschließende Leistungsfähigkeit der Lehrkräfte verbessern oder einschränken kann. Letzteres scheint allerdings von der konkreten Situation abzuhängen. Im ersten Teil der dritten Studie lag der Fokus auf den Bewältigungsstilen und dem erlebten Stress der Lehrkräfte. Da Bewältigungsstile nachweislich einen großen Einfluss auf die psychische Gesundheit haben, lag die Vermutung nahe, dass das durch Lärm verursachte Stresserleben je nach Bewältigungsstil unterschiedlich ausfällt.
Auf der Grundlage des Belastungs-Beanspruchungsmodells (Rudow, 2000) und des transaktionalen Stressmodells (Lazarus & Folkman, 1984) wurde angenommen, dass Lehrkräfte mit riskanten Copingstilen mehr Stresssymptome erleben. Deshalb wurde im Rahmen einer Online-Studie untersucht, ob es in Bezug auf psychische und körperliche Symptome Unterschiede zwischen Lehrkräften mit distinkten Bewältigungsstilen gibt. Dazu wurden 99 bayerische Grund- und Mittelschullehrkräfte befragt. Aus den übergeordneten Skalen Engagement und Resilienz resultierten vier berufliche Bewältigungsstile. Der Typ Gesundheit (hohes Engagement, hohe Resilienz), der Schon-Typ (niedriges Engagement, hohe Resilienz), Typ A (hohes Engagement, niedrige Resilienz) und Typ Burnout (niedriges Engagement, niedrige Resilienz) unterschieden sich hinsichtlich Bedrohungseinschätzung, Lärmstress, Stimm- und Hörproblemen sowie lärmbedingtem Burnout. Im Vergleich zum Typ Gesundheit wiesen die Risikotypen Typ A und Typ Burnout ein höheres Stresserleben auf und erwiesen sich generell als anfälliger gegenüber Schullärm. Dies ist die erste Studie, die zeigen konnte, dass Schullärm besonders für Lehrkräfte mit riskanten Copingstilen eine Gefährdung darstellt. Im zweiten Teil der dritten Studie lag der Fokus auf den Wirkungspfaden von Schullärm. Hier wurden Zusammenhänge zwischen individuellen Eigenschaften der Lehrkräfte und den unterschiedlichen Auswirkungen von Schullärm vermutet. Basierend auf dem vereinfachten Modell von Lehrerstress (van Dick & Wagner, 2001) wurde an 159 bayerischen Grund- und Mittelschullehrkräften untersucht, ob Lärmstress und Stimmermüdung die Verbindung zwischen Lärmempfindlichkeit und lärmbedingtem Burnout vermitteln. Die Ergebnisse zeigten, dass Stress die Beziehung zwischen Lärmempfindlichkeit und Stimmermüdung vermittelte; Stimmermüdung vermittelte die Beziehung zwischen Lärmstress und lärmbezogenem Burnout; Lärmstress und Stimmermüdung vermittelten seriell die Beziehung zwischen Lärmempfindlichkeit und lärmbedingtem Burnout. Dies ist die erste Studie, die Verbindungen zwischen lärmempfindlichen Lehrkräften und Lärmstress, Stimmproblemen und lärmbedingtem Burnout aufzeigen konnte. N2 - Noise in schools is considered a massive stress factor for teachers for various reasons and can therefore lead to performance deficits in the job, but also to physical and psychological impairments. Consequently, research on school noise and its effects is essential. The three studies presented in this thesis examined the immediate effects of noise on student teachers and the indirect effects on practicing teachers. In the first and second study, two experiments were conducted to examine the effects of noise during breaks on stress experience, performance in a concentration test, and error correction of a dictation. Based on the transactional stress model (Lazarus & Folkman, 1984), it was hypothesized that noise leads to an increase in stress experience. According to the maximal adaptability theory (Hancock & Warm, 1989, 2003), noise should initially cause optimal performance, but in the long run should cause performance impairment. For this purpose, in the first study 74 and in the second study 104 student teachers of the University of Passau worked on two different concentration tests and corrected a student’s dictation while listening to short, continuous, or no noise. In both experiments, continuous noise led to an increase in the experience of stress.
Neither short nor continuous noise led to a deterioration in concentration performance. Further, different findings emerged: In the first experiment, a short concentration test in combination with continuous noise led to positive effects in dictation correction, i.e., subjects showed better performance in error correction. In the second experiment, a long concentration test combined with short or continuous noise resulted in negative effects, i.e., subjects made more errors in the subsequent dictation correction. It can be concluded that school noise can, on the one hand, increase the experience of stress and, on the other hand, promote or limit teachers’ subsequent performance. The latter, however, seems to depend on the specific situation of the individual. In the first part of the third study, the focus was on teachers’ coping styles and experienced stress. Since coping styles have been shown to have a major impact on mental health, it was reasonable to assume that the stress experience caused by noise would vary depending on the coping style. Based on the stress-strain model (Rudow, 2000) and the transactional stress model (Lazarus & Folkman, 1984), it was hypothesized that teachers with risky coping styles experience more stress symptoms. Therefore, an online study was conducted to investigate whether there were differences in psychological and physical symptoms between teachers with different coping styles. For this purpose, 99 Bavarian elementary and middle school teachers were surveyed. Four professional coping styles resulted from the overarching scales of professional commitment and resilience. The healthy type (high commitment, high resilience), the unambitious type (low commitment, high resilience), type A (high commitment, low resilience), and type burnout (low commitment, low resilience) differed in terms of threat appraisal, noise stress, voice and hearing problems, and noise-related burnout. Compared to the healthy type, the risk types − type A and type burnout − exhibited higher stress experience and were generally more susceptible to school noise. This is the first study to show that school noise is particularly hazardous for teachers with risky coping styles. The second part of the third study focused on the impact pathways of school noise. Associations between teachers’ individual characteristics and the consequences of school noise were hypothesized. Based on the simplified model of teacher stress (van Dick & Wagner, 2001), we examined 159 Bavarian elementary and middle school teachers to determine whether noise stress and vocal fatigue mediate the association between noise sensitivity and noise-related burnout. Results indicated that noise stress mediated the relationship between noise sensitivity and vocal fatigue; vocal fatigue mediated the relationship between noise stress and noise-related burnout; noise stress and vocal fatigue serially mediated the relationship between noise sensitivity and noise-related burnout. This is the first study to show links between noise-sensitive teachers and noise stress, voice problems, and noise-related burnout.
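The serial mediation reported for the third study corresponds to a standard two-mediator serial path model; the following equations are a generic sketch of that design (in the style of Hayes's model 6), not the study's exact specification, with X = noise sensitivity, M1 = noise stress, M2 = vocal fatigue, and Y = noise-related burnout.

```latex
\begin{align}
M_1 &= i_1 + a_1 X + e_1 \\
M_2 &= i_2 + a_2 X + d_{21} M_1 + e_2 \\
Y   &= i_3 + c' X + b_1 M_1 + b_2 M_2 + e_3
\end{align}
% The serial indirect effect of X on Y through M_1 and M_2 is the
% product a_1 d_{21} b_2; the direct effect c' captures what remains
% of the link between noise sensitivity and burnout after both
% mediators are accounted for.
```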
KW - school noise KW - teacher stress KW - student teachers KW - practicing teachers KW - performance Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-10626 ER - TY - THES A1 - Steudner, Tobias T1 - Understanding Consumers' Digital Data Disclosure Decision-Making: A Focus on Data Sharing Cooperations, Perceived Risks and Low-Cognitive-Effort Processing N2 - Due to the advances of digitalization, firms are able to collect more and more personal consumer data and strive to do so. Moreover, many firms nowadays have a data sharing cooperation with other firms, so consumer data is shared with third parties. Accordingly, consumers are regularly confronted with the decision whether to disclose personal data to such a data sharing cooperation (DSC). Although privacy research has become highly important, the peculiarities of such disclosure settings with a DSC between firms have been neglected until now. Addressing this gap is the first research objective of this thesis. Another underexplored aspect in privacy research is the impact of low-cognitive-effort decision-making. This is because the privacy calculus, the dominant theory in privacy research, assumes that consumers' disclosure decision-making is a purely cognitively effortful and deliberative process. Expanding this perspective and examining the impact of low-cognitive-effort decision-making is therefore the second research objective of this thesis. Additionally, with the third research objective, this thesis strives to unify and increase the understanding of perceived privacy risks and privacy concerns, the two major antecedents that reduce consumers’ disclosure willingness. To this end, five studies are conducted: i) essay 1 examines and compares consumers’ privacy risk perception in a DSC disclosure setting with disclosure settings that include no DSC, ii) essay 2 examines whether in a DSC disclosure setting consumers rely more strongly on low-cognitive-effort processing for their disclosure decision, iii) essay 3 explores different consumer groups that vary in their perception of how a DSC affects their privacy risks, iv) essay 4 refines the understanding of privacy concerns and privacy risks and examines via meta-analysis the varying effect sizes of privacy concerns and privacy risks on privacy behavior depending on the applied measurement approach, v) essay 5 examines via autobiographical recall the effects of consumers’ feelings and arousal on disclosure willingness. Overall, this thesis sheds light on consumers’ personal data disclosure decision-making: essay 1 shows that the perceived risk associated with a disclosure in a DSC setting is not necessarily higher than that of a disclosure to an identical firm without a DSC. Also, essay 3 indicates that a DSC has a negative impact on disclosure willingness only for the smallest share of consumers and that one third of consumers do not think intensively about the consequences a DSC has for their privacy risks. Additionally, essay 2 shows that a stronger reliance on low-cognitive-effort processing is prevalent in DSC disclosure settings. Moreover, essay 5 shows that even unrelated feelings of consumers can impact their disclosure willingness, but the effect direction also depends on consumers’ arousal level.
This thesis contributes to theory in three ways: i) it sheds light on the peculiarities of DSC disclosure settings, ii) it suggests mechanisms and results of low-effort processing, and iii) it enhances the understanding of perceived privacy risks and privacy concerns as well as their resulting effect sizes. Besides theoretical contributions, this thesis offers practical implications as well: it allows firms to adjust the disclosure setting and the communication with their consumers in a way that makes them more successful in data collection. It also shows that firms do not need to be too anxious about a reduced disclosure willingness due to being part of a DSC. However, it also helps consumers themselves by showing in which circumstances they are most vulnerable to disclosing personal data. Making consumers conscious of situations in which they are especially vulnerable could serve as a countermeasure, preventing them from disclosing too much data and regretting it afterwards. Similarly, this thesis serves as thought-provoking input for regulators, as it emphasizes the importance of low-cognitive-effort processing for consumers’ decision-making, which regulators may be able to consider in the future. In sum, this thesis expands knowledge on how consumers decide whether to disclose personal data, especially in DSC settings and regarding low-cognitive-effort processing. It offers a more unified understanding of the antecedents of disclosure willingness as well as of consumers’ disclosure decision-making processes. This thesis opens up new research avenues and serves as groundwork, in particular for more research on data disclosures in DSC settings. KW - Privacy KW - Disclosure KW - Dual Processing KW - Emotion KW - Privacy Concerns Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-10583 ER - TY - THES A1 - Lenz, Luciane T1 - The diffusion of modern energy technologies in low-income settings - evidence from rural sub-Saharan Africa N2 - This collection of three chapters responds to today’s energy challenges. It explores innovative policy aimed at equipping the energy poor with access to improved cooking energy and electricity, looking both at the demand and supply side of modern energy technologies. Concretely, it discusses mechanisms to increase uptake of off-grid solar electricity in rural Rwanda based on experimental demand measurements (Chapter 1), it studies how to diffuse improved cooking technologies in rural Senegal via supply-side mechanisms (Chapter 2), and it identifies the need to target cooking technologies in consideration of the broader household context in rural Senegal and beyond (Chapter 3). KW - Technology adoption KW - Energy access KW - Rural development KW - Subsaharisches Afrika KW - Armut KW - Energietechnik KW - Erneuerbare Energien Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-10550 ER - TY - THES A1 - Beierl, Stefan T1 - Public Works Programmes: Review of their effectiveness and empirical essays on their contribution to climate resilience and social cohesion N2 - Poverty, underemployment, lack of infrastructure, low agricultural productivity, degradation of natural resources, climate change, and eroding social cohesion are among the biggest challenges that many low and lower-middle income countries are facing. Objectives linked to addressing these pressing challenges have been ascribed to public works programmes (PWPs).
These are social protection instruments which offer remuneration (in cash or kind) to vulnerable people in exchange for temporary work on labour-intensive, low-skill activities with social benefits. PWPs are being implemented in around two out of three developing countries. Given the substantial amounts spent on PWPs, it is critical to know to what extent the expectations towards them are backed by evidence. This dissertation sheds light on this overarching question with three self-contained essays. The first essay synthesises the evidence from PWPs in Sub-Saharan Africa, guided by three questions: First, what can we infer from the available impact evaluations regarding the effectiveness of PWPs as a social protection instrument? Second, what do we know about the role of the wage vector, asset vector, and skills vector in this respect? Third, what can we infer about the role of design features in explaining differences in outcomes? The other two essays use empirical evidence from Malawi to address more specific questions regarding the potential of PWPs to strengthen climate resilience and the relationship between PWPs and social cohesion. What sets the evidence synthesis in my first essay apart from existing reviews of PWPs is that it accounts for their heterogeneity by systematically differentiating results by PWP type and outcome area (income, consumption and expenditures, labour supply, food security, nutrition, asset holdings, agricultural production and techniques, and education). Programmes that offer short-term ad-hoc employment (Type 1) are distinguished from programmes that offer more predictable employment over longer periods (Type 2). For the review of impacts, the essay relies solely on (quasi-)experimental studies, but for the analysis of the role of design factors it also draws on other literature. In line with existing reviews, my results suggest that Type 1 programmes can effectively enable consumption smoothing in the wake of acute crises, whereas in contexts of chronic poverty, Type 2 programmes perform, on balance, better. Offering complementary access to extension services in Type 2 programmes can boost impacts further. However, in all cases, evidence is too scant and mixed to safely conclude whether the higher benefits of costlier PWP types justify the cost premium. The second essay investigates the potential of PWPs to strengthen climate resilience. Among the main social protection instruments, the biggest potential to strengthen climate resilience is often ascribed to PWPs if they create climate-smart community assets and transfer knowledge of climate-smart practices. Yet, there is a lack of evidence on whether design changes to this end can indeed enhance the contribution of an existing PWP to climate resilience. I use a difference-in-differences approach based on two-period panel data to analyse how a modified PWP model performs compared to the standard model of Malawi’s largest PWP after 24 months. The key modification is to embed public works in a communal watershed management plan with a strong emphasis on collective action and capacity building. I find that the modified approach considerably increased communal watershed management activities through voluntary labour contributions on top of the paid public works labour. While this increase was mainly driven by PWP participants, non-participants also made substantial contributions.
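The two-period panel design described above follows the canonical difference-in-differences logic; the regression below is a generic sketch of that estimator with illustrative variable names, not the essay's exact specification.

```latex
\begin{equation}
Y_{it} = \beta_0 + \beta_1\,\mathrm{Modified}_i + \beta_2\,\mathrm{Post}_t
       + \delta\,(\mathrm{Modified}_i \times \mathrm{Post}_t) + \varepsilon_{it}
\end{equation}
% \delta is the difference-in-differences estimate: the pre-to-post
% change in outcome Y under the modified PWP model minus the
% pre-to-post change under the standard model.
```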
I also find a small increase in the adoption of soil and water conservation practices on respondents’ private land, especially by non-PWP participants. These findings imply that such modest changes can make PWPs climate-smarter. In particular, they can broaden the engagement in and adoption of climate-smart activities beyond the group of PWP participants. The co-authored third essay investigates the relationship between Malawi’s MASAF PWP and social cohesion, specifically within-community cooperation for the common good. Like the existing studies, we face the challenge that neither the assignment of the programme to communities nor the selection of individual participants is randomised. We try to mitigate the endogeneity concerns by triangulating fixed effects panel analyses for a set of outcomes and sectors using two datasets with different units of analysis (households and communities). We find that public works are positively associated with coordination activities and voluntary (unpaid) contributions to public goods, along both vertical ties (between community members and local leaders) and horizontal ties (among community members). Especially for school-building activities, voluntary inputs in the form of labour and other in-kind contributions are higher in the presence of the public works programme. Our results contribute to a better understanding of the link between social protection programmes with community-driven features and social cohesion. Overall, the findings of the three essays in this dissertation contribute to the knowledge base regarding the effectiveness and potential of PWPs across a broad range of outcome areas. Specifically, they offer new insights into how to harness the potential of PWPs to strengthen climate resilience and into the seemingly positive relationship between PWPs and social cohesion. The findings can help researchers and policy makers who are interested specifically in PWPs or in any of the many objectives that can be pursued through PWPs. KW - Public Works KW - Social Protection KW - Social cohesion KW - Review KW - Climate resilience Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-10491 ER - TY - THES A1 - Luck, Nathalie T1 - Transformation towards a more sustainable agricultural system in Indonesia: Empirical essays on the role of information and endorsement N2 - In many cases, transitioning towards sustainable agricultural production requires farmers to change their practices. These changes can include the adoption of sustainable agricultural practices, water-saving, or the disadoption of excessive chemical input use or land burning. Policy makers interested in making agricultural production more sustainable need to understand what encourages the uptake of sustainable practices and what is effective in reducing unsustainable practices. This thesis seeks to understand whether and how information provision and endorsement can contribute to the transition towards more sustainable agricultural systems. The thesis consists of three self-contained papers. The first paper explores the potential of religious endorsement for inducing pro-environmental behaviour and encouraging the disadoption of fire as an agricultural practice, thereby preventing forest fires. The paper analyses the impact of a fatwa (an Islamic religious ruling) on reducing fire incidence in Indonesia. Results indicate that fire incidence decreased in Muslim-majority villages following the issuing of the fatwa.
For the post-fatwa period from August 2016 to December 2019, the average monthly effect amounts to around 2.2 prevented fire events per village. This is a considerable effect. The paper concludes that fire prevention efforts, and potentially other environmental conservation efforts, could benefit significantly from the support of religious institutions and stakeholders. The second paper investigates the role of information provision and training for the adoption of organic farming practices in Java, Indonesia. We use a randomised controlled trial (RCT) to identify the impact of a three-day hands-on training in organic farming for smallholder farmers. We find that the training intervention increased the adoption of organic inputs and had a positive and statistically significant effect on farmers’ knowledge and perceptions of organic farming. Overall, our findings suggest that information constraints are a barrier to the adoption of organic farming, as information provision increased the use of organic farming practices. The third paper investigates whether urban and suburban Indonesian consumers are willing to pay a price premium for organic food. We use an incentive-compatible auction based on the Becker-DeGroot-Marschak (BDM) approach to elicit consumers’ willingness to pay (WTP). We further study the effect of income and a randomised information treatment about the benefits of organic food on respondents’ WTP. Estimates suggest that consumers are willing to pay a price premium for organic rice, on average 20 percent more than what they paid for conventional rice outside of our experiment. However, our results also indicate that raising consumers’ WTP further is complex. Showing participants a video about the health or, alternatively, the environmental benefits of organic food was not effective in further raising WTP. Exposure to the environmental benefits video was, however, effective in raising stated organic food consumption intentions. KW - technology adoption, religion, organic farming, WTP KW - Landwirtschaft KW - Nachhaltigkeit KW - Entwicklungsökonomie Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-10037 ER - TY - THES A1 - Behm, Svenia T1 - Four essays on statistical modelling of environmental data N2 - This dissertation deals with geostatistical, time series, and regression analytical approaches for modelling spatio-temporal processes, using air quality data in the applications. The work is structured into four essays, the abstracts of which are given in the following. The first essay is titled 'Spatial detrending revisited: Modelling local trend patterns in NO2-concentration in Belgium and Germany'. It is written in co-authorship with Prof. Dr. Harry Haupt and Dr. Angelika Schmid and published in 2018 in Spatial Statistics 28, pp. 331-351 (https://doi.org/10.1016/j.spasta.2018.04.004). Abstract Short-term predictions of air pollution require spatial modelling of trends, heterogeneities, and dependencies. Two-step methods allow real-time computations by separating spatial detrending and spatial extrapolation into two steps. Existing methods discuss trend models for specific environments and require specification search. Given more complex environments, specification search gets complicated by potential nonlinearities and heterogeneities. This research embeds a nonparametric trend modelling approach in real-time two-step methods. Form and complexity of trends are allowed to vary across heterogeneous environments.
The proposed method avoids ad hoc specifications and potential generated-predictor problems of previous contributions. Examining Belgian and German air quality and land use data, we investigate local trend patterns in a data-driven way and compare them to results computed with existing methods and variations thereof. An important aspect of our empirical illustration is the heterogeneity and superior performance of local trend patterns for both research regions. The findings suggest that a nonparametric spatial trend modelling approach is a valuable tool for real-time predictions of pollution variables: it avoids specification search, provides useful exploratory insights and reduces computational costs. The second essay is titled 'Predictability of hourly nitrogen dioxide concentration'. It is written in co-authorship with Prof. Dr. Harry Haupt and published in 2020 in Ecological Modelling 428, 109076 (https://doi.org/10.1016/j.ecolmodel.2020.109076). Abstract Temporal aggregation of air quality time series is typically used to investigate stylized facts of the underlying series such as multiple seasonal cycles. While aggregation reduces complexity, commonly used aggregates can suffer from non-representativeness or non-robustness. For example, definitions of specific events such as extremes are subjective and may be prone to data contaminations. The aim of this paper is to assess the predictability of hourly nitrogen dioxide concentrations and to explore how predictability depends on (i) level of temporal aggregation, (ii) hour of day, and (iii) concentration level. Exploratory tools are applied to identify structural patterns, problems related to commonly used aggregate statistics, and suitable statistical modeling philosophies capable of handling multiple seasonalities and non-stationarities. Hourly time series and subseries of daily measurements for each hour of day are used to investigate the predictability of pollutant levels for each hour of day, with prediction horizons ranging from one hour to one week ahead. Predictability is assessed by time series cross-validation of a loss function based on out-of-sample prediction errors. Empirical evidence on hourly nitrogen dioxide measurements suggests that predictability strongly depends on conditions (i)-(iii) for all statistical models: for specific hours of day, models based on daily series outperform models based on hourly series, while in general predictability deteriorates with exposure level. The third essay is titled 'Agglomeration and infrastructure effects in land use regression models for air pollution – Specification, estimation, and interpretations'. It is written in co-authorship with Dr. Markus Fritsch and published in 2021 in Atmospheric Environment 253, 118337 (https://doi.org/10.1016/j.atmosenv.2021.118337). Abstract Established land use regression (LUR) techniques such as linear regression utilize extensive selection of predictors and functional form to fit a model for every data set on a given pollutant. In this paper, an alternative to established LUR modeling is employed, which uses additive regression smoothers. Predictors and functional form are selected in a data-driven way and ambiguities resulting from specification search are mitigated. The approach is illustrated with nitrogen dioxide (NO2) data from German monitoring sites using the spatial predictors longitude, latitude, altitude and structural predictors; the latter include population density, land use classes, and road traffic intensity measures.
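To make the additive-smoother specification concrete, the following minimal sketch fits such a land use regression with the pygam library; the library choice, data file, and column names are illustrative assumptions, not the essay's actual code or data.

```python
# A land use regression with additive smoothers: one penalized spline
# per predictor, so functional form is chosen in a data-driven way
# rather than fixed as a parametric polynomial.
import pandas as pd
from pygam import LinearGAM, s

sites = pd.read_csv("no2_sites.csv")  # hypothetical monitoring-site data
predictors = ["longitude", "latitude", "altitude",
              "population_density", "road_traffic_intensity"]
X = sites[predictors].values
y = sites["no2_mean"].values

gam = LinearGAM(s(0) + s(1) + s(2) + s(3) + s(4)).fit(X, y)
gam.summary()  # smooth terms, effective degrees of freedom, fit statistics
```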
The statistical performance of LUR modeling via additive regression smoothers is contrasted with LUR modeling based on parametric polynomials. Model evaluation is based on goodness of fit, predictive performance, and a diagnostic test for remaining spatial autocorrelation in the error terms. Additionally, interpretation and counterfactual analysis for LUR modeling based on additive regression smoothers are discussed. Our results have three main implications for modeling air pollutant concentration levels: First, modeling via additive regression smoothers is supported by a specification test and exhibits superior in- and out-of-sample performance compared to modeling based on parametric polynomials. Second, different levels of prediction errors indicate that NO2 concentration levels observed at background and traffic/industrial monitoring sites stem from different processes. Third, accounting for agglomeration and infrastructure effects is important: NO2 concentration levels tend to increase around major cities, surrounding agglomeration areas, and their connecting road traffic network. The fourth essay is titled 'Outlier detection and visualisation in multi-seasonal time series and its application to hourly nitrogen dioxide concentration'. It is single-authored and has not yet been published. Abstract Outlier detection in data on air pollutant recordings is conducted to uncover data points that refer to either invalid measurements or valid but unusually high concentration levels. As air pollutant data is typically characterised by multiple seasonalities, the task of outlier detection is associated with the question of how to deal with such non-stationarities. The present work proposes a method that combines time series segmentation, seasonal adjustment, and standardisation of random variables. While the former two are employed to obtain subseries of homoskedastic data, the latter ensures comparability across the subseries. Further, the standardised version of the seasonally adjusted subseries represents a scaled measure of how far each data point in the original time series lies from its mean and therefore forms a suitable basis for outlier detection. In an empirical application to data on hourly NO2 concentration levels recorded at a traffic monitoring site in Cologne, Germany, over the years 2016 to 2019, the common boxplot criterion is used to examine each standardised seasonally adjusted subseries for positive outliers. The results of the analyses are put into their natural temporal order and displayed in a heatmap layout that provides information on when single and sequential outliers occur. KW - Spatio-temporal modelling KW - time series analysis KW - regression KW - geostatistics KW - multiple seasonalities KW - environmental data KW - Statistik KW - Geostatistik KW - Umweltdaten Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-10539 ER - TY - THES A1 - Zierke, Oliver T1 - Innovation and Competition in the Digital Economy: Implications for Internet Platforms, Telecommunications Networks and Data Sharing Initiatives N2 - Governments around the world currently focus on shaping the digital economy. Particular attention is paid to Internet platforms, Internet infrastructure and data as essential components of the digital economy. The three studies in this thesis contribute to the understanding of the behavior of firms in each of these domains and derive insights for future regulations and business projects.
The first study deals with the ranking of content on Internet platforms and how it affects the incentives of content providers to invest in content quality. The focus of the study is on sponsored ranking and organic ranking, but the case that a vertically integrated content provider is favored by an Internet platform is also taken into account. Using a game-theoretic model, it is shown that there is no ranking design that strictly leads to more investment compared to the other designs. It is also shown that the Internet platform usually chooses the type of ranking that, from the perspective of the Internet platform and consumers, yields the best expected overall content quality. The second study deals with the incentive of Internet service providers to throttle specific Internet content. The key finding is that Internet service providers use this instrument to utilize the capacity of their telecommunications network more efficiently. This leads not only to more benefits for Internet users, but also to a higher incentive to invest in network capacity due to better monetization. The third study examines the circumstances under which firms are willing to share data with other firms. By means of an economic laboratory experiment, it is shown that more data is shared if the firms have control over exactly whom they share data with. Thus, for example, data pools that grant unrestricted data access to all participating firms can be expected to perform worse than data pools that give their participating firms control over with whom their uploaded data is shared. In addition, the third study finds that established relationships are characterized by more data sharing and less volatility in the amount of shared data than new relationships. The study concludes that data sharing projects should not be expected to work optimally right away. In summary, the studies in this thesis identify a number of costs that may arise when digital firms' choice is restricted by regulation or design. The ability of Internet service providers to throttle certain content and the ability of Internet platforms to choose the ranking design are usually used in the best interests of consumers. Data sharing also works best when firms are free to decide who gets their data. KW - online platform KW - ranking algorithm KW - sponsored search KW - platform regulation KW - net neutrality Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-10006 ER - TY - THES A1 - Mini, Tobias T1 - An Exploration of Tensions in Centralized and Decentralized Digital Platform Contexts N2 - Digital platforms consist of technical elements such as software and hardware and associated social elements such as organizational processes and standards. When such social or technical elements seem logical individually but inconsistent when juxtaposed, they form tensions. Prior research on platforms often focused on individual elements of digital platforms but neglected possibly related and conflicting elements, which offers limited insight into underlying tensions. While some studies on platforms considered tensions, they largely assumed that centralized platform owners are responsible for responding to tensions, neglecting collective response mechanisms in blockchain-based decentralized autonomous organizations (DAOs), where decentralized participants typically respond to tensions.
The examination of tensions in the context of centralized platforms and decentralized autonomous organizations offers an opportunity to surface conflicting elements that form novel types of socio-technical tensions requiring collective and technology-enabled response mechanisms. This thesis explored what tensions exist in centralized and decentralized digital platform contexts and how platform participants can respond to selected tensions. For this purpose, this thesis comprises five essays that employ multiple research methods, including interviews analyzed using grounded-theory techniques, qualitative meta-analysis of published case studies, and systematic literature reviews. The findings derived from all five essays contribute to a better understanding of tensions in digital platforms. In particular, this thesis (1) offers a lens for analyzing platforms as collective organizations in which tensions arise at the collective meta-organizational level, requiring collective responses, (2) identifies new tensions and response mechanisms related to generativity and collectivity, and (3) points to a novel category of socio-technical tensions that are especially salient in digital platforms. KW - Digital Platform KW - Decentralized Autonomous Organization KW - Tension KW - Grounded Theory Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-11322 ER - TY - THES A1 - Resch, René Matthias T1 - Three Essays on Brand Management in the Business-to-Business Context: Brand Identity, Brand Culture, and Brand Essence N2 - In three independent essays, this dissertation examines the overarching research question of how suppliers’ brand management, in the form of brand identity, brand culture, and brand essence, influences buyer-seller relationships. In Essay 1, I address the structure, capabilities, and outcomes of brand identity from a supplier perspective. Through qualitative interviews with suppliers, I examine how widespread the concept of brand identity is in practice and what exactly practitioners understand by it. Going further, I look at what capabilities and conditions are necessary for brand identity to be successful and what outcomes suppliers hope to achieve. Using an Information-Display-Matrix (IDM) test and a sample of Master of Business Administration (MBA) students, I examine the relevance of brand functions in more detail. In Essay 2, I use a dyadic dataset with matched buyer-seller dyads to examine the causes and effects of perceptual congruence and incongruence of brand culture strength on the buyer-seller relationship, while considering relationship-specific investments and interaction mechanisms as moderating effects. I show that congruence and incongruence have different effects on customer loyalty and price sensitivity and that these effects are strongly context-dependent. In Essay 3, I deal with brand essence strength interactions and their effects on the buyer-seller relationship. I use a dyadic dataset with matched buyer-seller dyads to show how brand essence strength influences customer loyalty and customer profitability, and how it interacts with key customer attitudes and other important indicators of buyer-seller relationship closeness. This dissertation makes a significant contribution to the literature on brand identity, brand culture, and brand essence in buyer-seller relationships.
Furthermore, my dissertation offers practical implications for managers at B2B suppliers who (re)shape their brand management with a focus on the inner parts of the brand. KW - brand management KW - business-to-business KW - brand identity KW - brand culture KW - brand essence Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-11589 ER - TY - THES A1 - Liang, Hanning T1 - Deflectometric Measurement of the Topography of Reflecting Freeform Surfaces in Motion N2 - Measuring the topography of specular surfaces with strong surface structures in motion was impossible before this research. A new method based on single-shot phase-measuring deflectometry (SSPMD) and combining different solution aspects is presented. Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-11672 ER - TY - THES A1 - Wharton, David T1 - Language, Orthography and Buddhist Manuscript Culture of the Tai Nuea - an apocryphal jātaka text in Mueang Sing, Laos N2 - The focus of this study is a single manuscript in a Tai Nuea village near Mueang Sing in northwestern Laos, copied in 1935 and entitled Pukthanusati (Pali buddhānussati), ‘the Recollection of the Buddha.’ It is written in the Tai Nuea language and Lik Tho Ngok or ‘Bean Sprout’ script, and is in the form of a jātaka or narrative story of a former lifetime of the Buddha, the most popular genre for recitation. The thesis examines the lay manuscript culture of which it is a part and the language and orthography of its contents, and then provides a phonemic transcription and annotated English translation of the text, together with a complete glossary of terms and images of the manuscript. The detailed study of this one manuscript is used as an entry point for a broader investigation of Lik manuscript culture as found in Mueang Sing today, including the distinct roles of the Lik and Tham orthographies, scribal vocation, manuscript production, uses and functions, and the contents of the texts. The Pukthanusati text is then the basis for an examination of phonological aspects of language use in the recitation of Lik manuscript literature and of the historical context of the dialect’s phonemes, as well as of Burmese and Indic forms occurring as loanwords. The Lik Tho Ngok orthography is also placed in context through an overview of its historical development, including possible prototypes and phonological influences, the twentieth-century reforms of some traditional orthographies, and the question of orthographic depth. The Lik Tho Ngok orthography as found in the manuscript studied is then described in detail, with tables and accompanying notes illustrating the inventories of consonant and vowel glyphs, consonant clusters, ligatures and special orthographic forms, use of subscripts and superscripts, numerals, punctuation, and the Tai Nuea spelling system. The phonemic transcription and annotated English translation of the text illustrate the rhyming structure and other features of specialised language use in Lik manuscript culture. The Tai Nuea and a number of closely related Tai groups have generally been overlooked in the field of Buddhist Studies. The study of this manuscript culture therefore contributes to our understanding of local practices on the northern periphery of Theravāda Buddhist influence in mainland Southeast Asia, in addition to responding to an urgent need to examine and document this endangered scribal tradition and its specialised use of language and orthography.
KW - Southeast Asia KW - Buddhism KW - Tai KW - Tai Nuea KW - Mueang Sing KW - Laos KW - Manuscript KW - Jataka KW - Apocryphal jataka KW - Linguistics KW - Orthography KW - Lik script KW - Literature KW - Buddhistische Literatur KW - Handschrift KW - Tai-Dehung KW - Linguistik Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-5236 ER - TY - THES A1 - Joblin, Mitchell T1 - Structural and Evolutionary Analysis of Developer Networks N2 - Large-scale software engineering projects are often distributed among a number of sites that are geographically separated by a substantial distance. In globally distributed software projects, time zone issues, language and cultural barriers, and a lack of familiarity among members of different sites all introduce coordination complexity and present significant obstacles to achieving a coordinated effort. For large-scale software engineering projects to satisfy their scheduling and quality goals, many developers must be capable of completing work items in parallel. A key factor in achieving this goal is to remove interdependencies among work items insofar as possible. By applying principles of modularity, work item interdependence can be reduced, but not removed entirely. As a result of uncertainty during the design and implementation phases and incomplete or misunderstood design intents, dependencies between work items inevitably arise and lead to requirements for developers to coordinate. The capacity of a project to satisfy coordination needs depends on how the work items are distributed among developers and how developers are organizationally arranged, among other factors. When coordination requirements fail to be recognized and appropriately managed, anecdotal evidence and prior empirical studies indicate that this condition results in decreased product quality and developer productivity. In essence, properties of the socio-technical environment, composed of developers and the tasks they must complete, provide important insights concerning the project's capacity to meet product quality and scheduling goals. In this dissertation, we make contributions to support socio-technical analyses of software projects by developing approaches for abstracting and analyzing the technical and social activities of developers. More specifically, we propose a fine-grained, verifiable, and fully automated approach to obtain a proper view on developer coordination, based on commit information and source-code structure mined from version-control systems. We apply methodology from network analysis and machine learning to identify developer communities automatically. To evaluate our approach, we analyze ten open-source projects with complex and active histories, written in various programming languages. By surveying 53 open-source developers from the ten projects, we validate the accuracy of the extracted developer network and the authenticity of the inferred community structure. Our results indicate that developers of open-source projects form statistically significant community structures and this particular network view largely coincides with developers' perceptions. Equipped with a valid network view on developer coordination, we extend our approach to analyze the evolutionary nature of developer coordination. By means of a longitudinal empirical study of 18 large open-source projects, we examine and discuss the evolutionary principles that govern the coordination of developers.
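To illustrate the network abstraction used here, the sketch below builds a small developer network from commit records and applies a standard community-detection algorithm; the toy commit list and the edge rule (two developers co-editing the same artifact) are simplified assumptions, not the dissertation's exact fine-grained method.

```python
# Developers become nodes; an edge links two developers whenever they
# modified the same artifact, a simplified proxy for coordination needs.
from itertools import combinations
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical (developer, artifact) pairs mined from version control.
commits = [
    ("alice", "parser.c"), ("bob", "parser.c"), ("bob", "net.c"),
    ("carol", "net.c"), ("carol", "ui.c"), ("dave", "ui.c"),
]

G = nx.Graph()
for artifact in {a for _, a in commits}:
    devs = sorted({d for d, a in commits if a == artifact})
    for u, v in combinations(devs, 2):
        # Accumulate edge weight: repeated co-edits mean stronger ties.
        weight = G.get_edge_data(u, v, {"weight": 0})["weight"]
        G.add_edge(u, v, weight=weight + 1)

# Identify developer communities via modularity maximization.
for i, community in enumerate(greedy_modularity_communities(G)):
    print(f"community {i}: {sorted(community)}")
```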
We found that the implicit and self-organizing structure of developer coordination is ubiquitously described by non-random organizational principles that defy conventional software-engineering wisdom. In particular, we found that: (a) developers form scale-free networks, in which the majority of coordination requirements arise among an extremely small number of developers, (b) developers tend to accumulate coordination requirements with more and more developers over time, presumably limited by an upper bound, and (c) initially developers are hierarchically arranged, but over time, form a hybrid structure, in which highly central developers are hierarchically arranged and all other developers are not. Our results suggest that the organizational structure of large software projects is constrained to evolve towards a state that balances the costs and benefits of coordination, and the mechanisms used to achieve this state depend on the project's scale. As a final contribution, we use developer networks to establish a richer understanding of the different roles that developers play in a project. Developers of open-source projects are often classified according to core and peripheral roles. Typically, count-based operationalizations, which rely on simple counts of individual developer activities (e.g., number of commits), are used for this purpose, but there is concern regarding their validity and ability to elicit meaningful insights. To shed light on this issue, we investigate whether count-based operationalizations of developer roles produce consistent results, and we validate them with respect to developers' perceptions by surveying 166 developers. We improve over the state of the art by proposing a relational perspective on developer roles, using our fine-grained developer networks, and by examining developer roles in terms of developers' positions and stability within the developer network. In a study of 10 substantial open-source projects, we found that the primary difference between the count-based and our proposed network-based core-peripheral operationalizations is that the network-based ones agree more with developer perception than the count-based ones. Furthermore, we demonstrate that a relational perspective can reveal further meaningful insights, such as that core developers exhibit high positional stability, upper positions in the hierarchy, and high levels of coordination with other core developers, which confirms assumptions of previous work. Overall, our research demonstrates that data stored in software repositories, paired with appropriate analysis approaches, can elicit valuable, practical, and valid insights concerning socio-technical aspects of software development. KW - Software Engineering Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-4616 ER - TY - THES A1 - Schubach, Sebastian T1 - Caught between two stools? Four essays on value creation and value extraction on free digital platforms N2 - Free digital platforms constitute one of the most important phenomena of modern times; they create value by bringing together customer groups that would not have interacted without digital technology or that could have done so only by incurring increased costs. In the free digital platform model, firms pay for the interaction with end consumers who use the digital platform for free.
Extant research on two-sided markets has provided rich evidence for how digital platforms can attract enough members from both customer groups to enable interaction between them. However, this research lacks insights into how free digital platforms can create value for their customer groups once these customer groups have joined the platform, and how they can extract this value for themselves. To address this substantial research gap, in this dissertation, I investigate the overall research question of how activities of free digital platforms affect the value creation for their customer groups and the ability of the platform to extract this value. In a first step, I examine this value creation and value extraction by focusing on concrete activities of free digital platforms. In Study 1, I investigate how offering firms the possibility of personalizing and positioning their search ads on search engines affects consumers’ search engine click behavior. In Study 2, I examine how adapting ad positions to consumers’ previous online shopping behavior on search engines influences consumers’ click and conversion behavior. In Study 3, I investigate the impact of a review platform’s policy of tagging reviews as written on either mobile or nonmobile devices on consumers’ perceptions of review helpfulness. In a second step, in Study 4, I generalize these findings by investigating the overall impact of such customer-oriented activities on value creation for customer groups and on the extraction of this value by free digital platforms. These four studies yield three major findings. First, free digital platforms’ activities toward one customer group always affect the value creation of the other customer group as well. Second, free digital platforms should emphasize value creation activities especially for non-paying customer groups. Third, internal, operative, and macro-environments influence the value creation and value extraction of free digital platforms. With this dissertation, I make substantial contributions to research on two-sided markets, customer orientation, search engine advertising, and online reviews. In addition, my dissertation provides numerous actionable recommendations for managers of free digital platforms and outlines promising avenues for further research. N2 - Kostenfrei nutzbare Internetplattformen zählen zu den wichtigsten Phänomenen der digitalen Ära. In der Forschung bleiben sie aktuell jedoch weitestgehend unerforscht. Aus diesem Grund beschäftigt sich die vorliegende Dissertation anhand von vier empirischen Studien mit der Frage, wie Anbieter dieser kostenfrei nutzbaren Internetplattformen Wert für ihre Kunden schaffen und Wert für sich abschöpfen können. Die Ergebnisse der Dissertation halten wichtige Implikationen für die Forschung und die Unternehmenspraxis bereit. KW - Two-sided markets KW - Value creation KW - Free e-services KW - Value extraction KW - Digital platforms Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-6451 ER - TY - THES A1 - Zhu, Chunjie T1 - Understanding the Formation and Improving the Accuracy of Teacher Judgment N2 - Research on teacher judgment has made considerable progress over the past three decades. The importance of teacher judgment and the variability in judgment accuracy call for closer investigation.
Based on a review of previous studies, a systematic analytical framework consisting of three main studies was developed to broaden the understanding of the processes and characteristics of teacher judgment. The three studies presented here examined, in particular, how teacher judgments are generated from different student characteristics, what options exist for improving teachers' judgment accuracy, and whether teachers' judgment accuracy can remain stable over time. The first study applied the lens model of social judgment theory to better understand teachers' assessments of student achievement and their information-processing strategies. 260 teachers from seven Chinese primary schools were asked to select and rate, from seven information sources, student characteristics on the basis of which they could judge student achievement. The teachers developed a clear hierarchy of the data sources used. The best information was drawn from students' abilities and attitudes, and the least important information from social interaction with others and from student demographics. To make more accurate judgments, teachers should be informed about valid indicators of student achievement. The second study aimed to improve teachers' judgment accuracy and student achievement through the use of classroom response systems ('clickers'). Twenty school classes with 459 sixth-grade students and their mathematics teachers were divided into three groups for a five-week quasi-experimental intervention study with a pre- and post-test. The results show that both goals were largely achieved. Students in the clicker group acquired more mathematical knowledge through the intervention than students in the diary and control groups. Teacher judgments in all three groups became more accurate from pre- to post-test; however, teachers who used clickers judged with the highest accuracy. Clickers can therefore be recommended as a valuable tool for improving teachers' judgment accuracy. The third study examined the temporal stability of teachers' judgment accuracy with respect to student motivation, emotion, and achievement. Nine classes with 326 sixth-graders from a Chinese primary school and their mathematics teachers participated in the study. The students worked on a standardized mathematics test and a self-report questionnaire on motivation and emotion. The teachers judged each individual student's motivation, emotion, and achievement using single items. Teacher judgments and student characteristics were measured twice within four weeks. The results showed that teachers were able to judge student achievement with high accuracy, student motivation with moderate to high accuracy, and student emotion mostly with low accuracy. Teachers' judgment accuracy was very stable, with only minor changes in the individual accuracy components.
It can be concluded that Chinese primary school teachers are able to make fair judgments about their students' achievement and motivation at different points in time. Students' emotions, however, are difficult for teachers to assess. KW - Teachers KW - Student assessment KW - Measurement of school achievement KW - Judgment ability Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-7387 ER - TY - CHAP A1 - Parra Rodriguez, Juan D. A1 - Posegga, Joachim T1 - RAPID: Resource and API-Based Detection Against In-Browser Miners T2 - Proceedings of the 34th Annual Computer Security Applications Conference N2 - Direct access to the system's resources such as the GPU, persistent storage and networking has enabled in-browser crypto-mining. Thus, there has been a massive response by rogue actors who abuse browsers for mining without the user's consent. This trend has grown steadily over recent months, to the point where this practice, i.e., CryptoJacking, has been acknowledged as the number one security threat by several antivirus companies. Considering this, and the fact that these attacks do not behave like JavaScript malware or other Web attacks, we propose and evaluate several approaches to detect in-browser mining. To this end, we collect information from the top 330,500 Alexa sites. Mainly, we used real-life browsers to visit sites while monitoring resource-related API calls and the browser's resource consumption, e.g., CPU. Our detection mechanisms are based on dynamic monitoring, so they are resistant to JavaScript obfuscation. Furthermore, our detection techniques can generalize well and classify previously unseen samples with up to 99.99% precision and recall for the benign class and up to 96% precision and recall for the mining class. These results demonstrate the applicability of detection mechanisms as a server-side approach, e.g., to support the enhancement of existing blacklists. Last but not least, we evaluated the feasibility of deploying prototypical implementations of some detection mechanisms directly on the browser. Specifically, we measured the impact of in-browser API monitoring on page-loading time and performed micro-benchmarks for the execution of some classifiers directly within the browser. In this regard, we ascertain that, even though there are engineering challenges to overcome, it is feasible and beneficial for users to bring the mining detection to the browser. KW - Web Security KW - WebRTC KW - postMessage KW - Browser Security KW - Content Security Policy Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-6550 SN - 978-1-4503-6569-7 PB - ACM CY - New York, NY, USA ER - TY - CHAP A1 - Parra Rodriguez, Juan D. A1 - Schreckling, Daniel A1 - Posegga, Joachim T1 - Addressing Data-Centric Security Requirements for IoT-Based Systems T2 - 2016 International Workshop on Secure Internet of Things (SIoT) N2 - Allowing users to control access to their data is paramount for the success of the Internet of Things; therefore, it is imperative to ensure it, even when data has left the users' control, e.g., when shared with cloud infrastructure. Consequently, we propose several state-of-the-art mechanisms from the security and privacy research fields to cope with this requirement. To illustrate how each mechanism can be applied, we derive a data-centric architecture providing access control and privacy guarantees for the users of IoT-based applications.
Moreover, we discuss the limitations and challenges related to applying the selected mechanisms to ensure access control remotely. Also, we validate our architecture by showing how it empowers users to control access to their health data in a quantified self use case. KW - Internet of Things KW - Security Architecture KW - Data-Centric Security KW - Differential Privacy KW - Secure Cloud Storage Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-6546 SN - 978-1-5090-5091-8 PB - IEEE Xplore CY - Heraklion, Greece ER - TY - CHAP A1 - Parra Rodriguez, Juan D. A1 - Posegga, Joachim T1 - Local Storage on Steroids: Abusing Web Browsers for Hidden Content Storage and Distribution T2 - International Conference on Security and Privacy in Communication Systems N2 - Analysing security assumptions taken for the WebRTC and postMessage APIs led us to find a novel attack abusing the browsers' persistent storage capabilities. The presented attack can be executed without the website visitor's knowledge, and it requires neither browser vulnerabilities nor additional software on the browser's side. To exemplify this, we study how an attacker can use browsers to create a network for persistent storage and distribution of arbitrary data. In our proof of concept, the total storage of the network, and therefore the space used within each browser, grows linearly with the number of origins delivering the malicious JavaScript code. Further, data transfers between browsers are not restricted by the Same Origin Policy, which allows for a unified cross-origin browser network, regardless of the origin from which the script executing the functionality is loaded. In the course of our work, we assess the feasibility of a real-life deployment of the network by running experiments using Linux containers and browser automation tools. Moreover, we show how security mechanisms against third-party tracking, cross-site scripting and click-jacking can diminish the attack's impact, or even prevent it. KW - Web Security KW - WebRTC KW - postMessage KW - Browser Security Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-6572 SN - 978-3-030-01704-0 PB - Springer CY - Cham ER - TY - CHAP A1 - Parra Rodriguez, Juan D. A1 - Posegga, Joachim T1 - CSP & Co. Can Save Us from a Rogue Cross-Origin Storage Browser Network! But for How Long? T2 - Proceedings of the Eighth ACM Conference on Data and Application Security and Privacy N2 - We introduce a new browser abuse scenario where an attacker uses local storage capabilities without the website visitor's knowledge to create a network of browsers for persistent storage and distribution of arbitrary data. We describe how security-aware users can use mechanisms such as the Content Security Policy (CSP), sandboxing, and third-party tracking protection, i.e., CSP & Company, to limit the network's effectiveness. From another point of view, we also show that the upcoming Suborigin standard can inadvertently thwart existing countermeasures if it is adopted.
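A note following the three browser-abuse records above: the countermeasure they share is a restrictive Content Security Policy. What follows is a minimal sketch, not taken from the papers, of how a site operator might assemble such a header in Python; the directive values are illustrative assumptions, and CSP's coverage of WebRTC traffic varies across browsers.

# Hypothetical example: assembling a restrictive CSP header.
directives = {
    "default-src": "'self'",       # fallback for all resource types
    "script-src": "'self'",        # blocks injected third-party storage/mining scripts
    "connect-src": "'self'",       # restricts fetch/XHR/WebSocket endpoints
    "frame-ancestors": "'none'",   # click-jacking defence, also mentioned above
}
csp = "; ".join(f"{name} {value}" for name, value in directives.items())

def add_csp_header(headers):
    # Attach the policy to an HTTP response header mapping.
    headers["Content-Security-Policy"] = csp
    return headers

print(add_csp_header({}))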
KW - Web Security KW - WebRTC KW - PostMessage KW - Browser Security KW - Parasitic Computing Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-6561 SN - 978-1-4503-5632-9 PB - ACM CY - New York, NY, USA ER - TY - THES A1 - McLarren, Katharina T1 - Religion and International Society - Approaches to Including Religion in the International Relations Research Agenda N2 - Religion can unite and divide; it can lead to a strengthening or a weakening of identity and legitimacy. Religion can stoke conflicts, but it can also pacify them – within societies and in international politics. Religion endures: it can exist independently of states, constitute them, and provide new forms of states, societies, and empires. Arguably, religion shapes or even constitutes the international society of states, an aspect so far neglected in the field of International Relations. The dissertation provides a new definition of religion for International Relations and the English School in particular. Based upon this understanding of religion, the five publications presented in the dissertation provide new analytical and theoretical concepts and approaches to fill the research gap. Religion is integrated into the theoretical framework of the English School in the form of a “prime institution” and with the help of the “quilt model”. While the former expands the theoretical framework, the latter adds an analytical layer. Based upon this definition, religion is also introduced as a concept (“hybrid actorness”) in Foreign Policy Analysis, opening it up to become less state-centric and more transnationally oriented, thereby boosting its relevance for the evolving international (global) society. In another step, the Securitization framework of analysis is expanded to include (freedom of) religion. By revisiting the publications, the dissertation is able to identify next steps in terms of avenues of research. Finally, the dissertation reveals areas of study which contribute to increasing the pertinence of IR, particularly of the English School. KW - Religion KW - International Society KW - English School Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-12365 ER - TY - THES A1 - Gerl, Armin T1 - Modelling of a Privacy Language and Efficient Policy-based De-identification N2 - The processing of personal information is omnipresent in our data-driven society enabling personalized services, which are regulated by privacy policies. Although privacy policies are strictly defined by the General Data Protection Regulation (GDPR), no systematic mechanism is in place to enforce them. Especially if data is merged from several sources into a data-set with different privacy policies associated, the management of and compliance with all privacy requirements is challenging during the processing of the data-set. Privacy policies can vary due to different policies for each source or the personalization of privacy policies by individual users. Thus, there is a risk of negligent or malicious processing of personal data in defiance of privacy policies. To tackle this challenge, a privacy-preserving framework is proposed. Within this framework, privacy policies are expressed in the proposed Layered Privacy Language (LPL), which allows the specification of legal privacy policies and privacy-preserving de-identification methods. The policies are enforced by a Policy-based De-identification (PD) process.
The PD process enables efficient compliance with various privacy policies simultaneously while applying pseudonymization, personal privacy anonymization and privacy models for de-identification of the data-set. Thus, the privacy requirements of each individual privacy policy are enforced, filling the gap between legal privacy policies and their technical enforcement. KW - Privacy Language KW - Personal Privacy KW - Privacy-Preservation KW - GDPR KW - LPL KW - Data protection KW - Anonymization KW - Pseudonymization KW - Formal language KW - Data protection law Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-7674 ER - TY - THES A1 - Niedermeier, Florian T1 - Power-Adaptive Computing in Future Energy Networks N2 - The current electricity grid is undergoing major changes. There is increasing pressure to move away from power generation from fossil fuels, both due to ecological concerns and fear of dependencies on scarce natural resources. Increasing the share of decentralized generation from renewable sources is a widely accepted way to a more sustainable power infrastructure. However, this comes at the price of new challenges: generation from solar or wind power is not controllable and can be forecast only with limited accuracy. To compensate for the increasing volatility in power generation, exerting control on the demand side is a promising approach. By providing flexibility on the demand side, imbalances between power generation and demand may be mitigated. This work is concerned with developing methods to provide grid support on the demand side while limiting the associated costs. This is done in four major steps: first, the target power curve to follow is derived, taking both the goals of a grid authority and the costs of the respective load into account. Next, the special case of data centers, as an instance of significant loads inside a power grid, is focused on more closely. Data center services are adapted in such a way as to achieve the previously derived power curve. By means of hardware power demand models, the required adaptation of hardware utilization can be derived. The possibilities of adapting software services are investigated for the special use case of live video encoding. A method to minimize quality of experience loss while reducing power demand is presented. Finally, the possibility of applying probabilistic model checking to a continuous demand-response scenario is demonstrated. KW - Power-adaptive software KW - Energy systems KW - Energy supply KW - Software Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-9993 ER - TY - THES A1 - Bongers, Franziska Maria T1 - Three essays on digital and non-digital transformations in business-to-business markets N2 - Fundamental changes in business-to-business (B2B) buying behavior confront B2B supplier firms with unprecedented challenges. On the one hand, a rising share of industrial buyers demands digitalized offerings and processes from suppliers. Consequently, suppliers are urged to implement digital transformations by expanding the range of both digital offerings and processes. On the other hand, B2B buyers increasingly expect suppliers to provide individually tailored solutions to their idiosyncratic needs. Hence, suppliers are also required to implement non-digital transformations by providing offerings and processes that are customized to each customer's specific requirements.
The rise of these digital and non-digital transformations calls established knowledge into question. Thus, B2B marketing research and practice are urged to create a comprehensive understanding of digital and non-digital transformations by means of novel and empirically grounded insights and derive actionable response strategies. In response, my dissertation addresses the overall research question of how B2B supplier firms can successfully implement both digital and non-digital transformations in three individual essays. In Essay 1, I offer a broader perspective on both digital and non-digital transformations by investigating digital service customization (i.e., the tailoring of digital B2B services to customers’ individual needs). Through a systematic literature review and bibliometric analysis, I outline a comprehensive set of factors that favor the application of distinct digital service customization strategies. Essay 2 represents a deep dive into digital transformations of sales processes. By making use of two rich sets of qualitative interview material from supplier and buyer firms, I identify the challenges resulting for B2B salespeople from the introduction of digital sales channels into personal selling. Moreover, I uncover facilitating mechanisms that sales managers can employ to support salespeople in coping with digital sales channels. Finally, Essay 3 constitutes a deep dive into non-digital transformations. Based on qualitative interview material and survey data from matched sales manager–salesperson dyads, the essay explores how configurations of individual salespeople’s personal and procedural competencies facilitate success at selling customer solutions (i.e., highly customized, performance-oriented offerings comprising products and/or services). The essay shows that successfully selling customized offerings like solutions hinges on salespeople’s unique configurations of present and absent competencies. In a nutshell, these essays provide three major insights on how B2B suppliers can successfully implement digital and non-digital transformations. First, they underscore that a comprehensive understanding of the origins and spillover effects of transformations is a key prerequisite to successfully implementing them. Second, they unveil that digital and non-digital transformations have effects on multiple organizational levels. Third, they point out important resources and capabilities that help suppliers to successfully implement transformations, be they digital or non-digital. With this dissertation, I make substantial contributions to the broader literature on digital and non-digital transformations in B2B contexts. At the same time, my dissertation provides hands-on implications for managers in B2B supplier firms that are facing fundamental transformations in the marketplace, both digital and non-digital in nature. KW - business-to-business; digitalization; customization; digital services; digital sales channels; customer solutions Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-9149 ER - TY - THES A1 - Salehi Rizi, Fatemeh T1 - Graph Representation Learning for Social Networks N2 - Online social networks provide a rich source of information about millions of users worldwide. However, due to their sparsity and complex structure, analyzing these networks is quite challenging and expensive. Recently, graph embedding has emerged to map networked data into low-dimensional representations, i.e., vector embeddings.
These representations are fed into off-the-shelf machine learning algorithms to simplify and speed up graph analytic tasks. Given the immense importance of social network analysis, in this thesis, we aim to study graph embedding for social networks in three directions. Firstly, we focus on social networks at the microscopic level to primarily encode the structural characteristics of users' personal networks, so-called ego networks. These representations are utilized in evaluation tasks whose performance depends on relational information from direct neighbors. For example, social circle prediction and event attendance inference both need structural information from neighbors in social networks. Secondly, we explore assessing the content of vector embeddings in terms of topological properties. We do this via two proposed approaches: 1) a learning-to-rank algorithm in which the model weights reveal the importance of properties at the subgraph level (ego networks), and 2) a regression model for the direct approximation of network statistical properties at the vertex level. Thirdly, we propose extensions of graph embedding to capture the sign or additional content of social networks. Users in social media often express their feelings and attitudes towards others, which forms sentiment links besides social links. We design a joint objective function whose terms capture the semantics of both social and sentiment links simultaneously. We also propose a multi-task learning framework for networks with attributes and labels by stacking autoencoders. The weights of the learning tasks are automatically assigned via an adaptive loss weighting layer. Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-9211 ER - TY - JOUR A1 - Frank, Florian A1 - Böttger, Simon A1 - Mexis, Nico A1 - Anagnostopoulos, Nikolaos Athanasios A1 - Mohamed, Ali A1 - Hartmann, Martin A1 - Kuhn, Harald A1 - Helke, Christian A1 - Arul, Tolga A1 - Katzenbeisser, Stefan A1 - Hermann, Sascha T1 - CNT-PUFs: highly robust and heat-tolerant carbon-nanotube-based physical unclonable functions N2 - In this work, we explored a highly robust and unique Physical Unclonable Function (PUF) based on the stochastic assembly of single-walled Carbon NanoTubes (CNTs) integrated within a wafer-level technology. Our work demonstrated that the proposed CNT-based PUFs are exceptionally robust, with an average fractional intra-device Hamming distance well below 0.01 both at room temperature and under varying temperatures in the range from 23 °C to 120 °C. We attributed the excellent heat tolerance to comparatively low activation energies of less than 40 meV extracted from an Arrhenius plot. As the number of unstable bits in the examined implementation is extremely low, our devices allow for lightweight and simple error handling, simply by selecting stable cells, thereby diminishing the need for complex error correction. Through a significant number of tests, we demonstrated the capability of novel nanomaterial devices to serve as highly efficient hardware security primitives.
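A brief illustration of the robustness metric reported in the CNT-PUF article above: a sketch, not the authors' code, that computes the fractional intra-device Hamming distance between repeated readouts and selects stable cells; the 16-bit readouts are fabricated for the example.

def fractional_hd(a, b):
    # Fraction of positions in which two equal-length bit vectors differ.
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b)) / len(a)

def stable_cells(readouts):
    # Indices of cells that returned the same bit in every readout;
    # selecting only these cells is the lightweight alternative to
    # complex error correction mentioned in the abstract.
    n = len(readouts[0])
    return [i for i in range(n) if len({r[i] for r in readouts}) == 1]

readouts = [[1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1] for _ in range(5)]
readouts[2][3] ^= 1  # one flaky cell flips once
print(fractional_hd(readouts[0], readouts[2]))  # 0.0625
print(stable_cells(readouts))                   # every index except 3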
KW - Carbon NanoTube (CNT) KW - Physical Unclonable Function (PUF) KW - Nanomaterials (NMs) KW - hardware security KW - security KW - privacy KW - Internet of Things (IoT) Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-14011 VL - 2023 IS - 13(22) PB - MDPI CY - Basel ER - TY - JOUR A1 - Hassen, Wiem Fekih A1 - Ben Ahmed, Mariem T1 - Optimization of a Redox-Flow Battery Simulation Model Based on a Deep Reinforcement Learning Approach JF - Batteries N2 - Vanadium redox-flow batteries (VRFBs) have played a significant role in hybrid energy storage systems (HESSs) over the last few decades owing to their unique characteristics and advantages. Hence, the accurate estimation of the VRFB model holds significant importance in large-scale storage applications, as such models are indispensable for incorporating the distinctive features of energy storage systems and control algorithms within embedded energy architectures. In this work, we propose a novel approach that combines model-based and data-driven techniques to predict battery state variables, i.e., the state of charge (SoC), voltage, and current. Our proposal leverages enhanced deep reinforcement learning techniques, specifically deep Q-learning (DQN), which combines Q-learning with neural networks, to optimize the VRFB-specific parameters, ensuring a robust fit between the real and simulated data. Our proposed method outperforms the existing approach in voltage prediction. Subsequently, we enhance the proposed approach by incorporating a second deep RL algorithm, dueling DQN, an improvement of DQN, resulting in a 10% improvement in the results, especially in terms of voltage prediction. The proposed approach results in an accurate VRFB model that can be generalized to several types of redox-flow batteries. KW - energy storage KW - redox-flow battery KW - battery modeling KW - battery state variables KW - parameter optimization KW - accurate estimation KW - voltage prediction KW - deep reinforcement learning KW - deep q-learning KW - dueling deep q-networks Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-13994 VL - 10 PB - MDPI CY - Basel ER - TY - JOUR A1 - Rowedder, Simon A1 - Wilcox, Phill A1 - Brandtstädter, Susanne T1 - Negotiating Chinese Infrastructures of Modern Mobilities: Insights from Southeast Asia JF - Advances in Southeast Asian Studies (ASEAS) N2 - Since the launch of the BRI, particular modes of movement have been integral to its vision of what it means to be a modern world citizen. Nowhere is this more apparent than in Southeast Asia, where China-backed infrastructure projects expand, and at great speed. Such infrastructure projects are carriers of particular versions of modernity, promising rapid mobility to populations better connected than ever before. Yet, until now, little attention has been paid to how mobility and promises of mobility intersect with local understandings of development. In the introduction to this special issue, we argue that it is essential to think about the role infrastructure plays in forms of development that place connectivity at the center. We suggest that considering development, mobility and modernity together is enlightening because it interrogates the connections between these interlocking themes.
Through an introduction to five ethnographically grounded papers and two commentaries, all of which engage with infrastructures in different contexts throughout Southeast Asia, we demonstrate that there are significant gaps between official policy and lived experience. This makes the need to interrogate what infrastructure, mobilities, and global China really mean all the more pressing. KW - Belt and Road Initiative (BRI) KW - China-Backed Infrastructure KW - Development KW - Mobility KW - Southeast Asia Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-13973 VL - vol. 16 IS - no. 2 SP - 175 EP - 188 ER - TY - JOUR A1 - Hassen, Wiem Fekih A1 - Azzouz, Imen T1 - Optimization of Electric Vehicles Charging Scheduling Based on Deep Reinforcement Learning: A Decentralized Approach JF - Energies N2 - The worldwide adoption of Electric Vehicles (EVs) has brought promising advancements toward a sustainable transportation system. However, the effective charging scheduling of EVs is not a trivial task due to the increase in load demand at Charging Stations (CSs) and the fluctuation of electricity prices. Moreover, other issues that raise concern among EV drivers are the long waiting time and the inability to charge the battery to the desired State of Charge (SOC). In order to alleviate users' range anxiety, we apply a Deep Reinforcement Learning (DRL) approach that provides the optimal charging time slots for an EV based on Photovoltaic power prices, the current EV SOC, the charging connector type, and the history of load demand profiles collected in different locations. Our implemented approach maximizes the EV profit while leaving EV drivers the freedom to select the preferred CS and the best charging time (i.e., morning, afternoon, evening, or night). The analysis of the results proves the effectiveness of the DRL model in reducing the charging costs of the EV by up to 60%, while providing a full charging experience with a waiting time of at most 30 min. KW - smart EV charging KW - day-ahead planning KW - deep Q-Network KW - data-driven approach KW - waiting time KW - cost minimization KW - real dataset Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-13985 VL - 16 PB - MDPI CY - Basel ER - TY - THES A1 - Dengel, Andreas T1 - Effects of Immersion and Presence on Learning Outcomes in Immersive Educational Virtual Environments for Computer Science Education T1 - Effekte von Immersion und Präsenz auf Lernerfolg in immersiven virtuellen Lernumgebungen zu Themen der Informatik N2 - Abstract concepts and ideas in Computer Science Education can benefit from immersive visualizations that can be provided in virtual environments. This thesis explores the effects of the key characteristics of virtual environments, immersion and presence, on learning outcomes in Educational Virtual Environments for learning Computer Science. Immersion is a quantifiable description of a technology's capability to immerse the user in the virtual environment; presence describes the subjective feeling of 'being there'. While technological immersion can be seen as a strong predictor for presence, motivational traits, cognition, and the emotional state of the user also influence presence. A possible localization of these technological and person-specific variables in Helmke's pedagogical supply-use framework is introduced as the Educational Framework for Immersive Learning (EFiL).
Presence is emphasized as a central criterion influencing immersive learning processes. The EFiL provides an educational understanding of immersive learning as learning activities initiated by a mediated or medially enriched environment that evokes a sense of presence. The idea of Computer Science Unplugged is pursued by using Virtual Reality technology in order to provide interactive virtual learning experiences that can be accurately displayed, schematizing, substantiating, or metaphorical. For exploring the effects of virtual environment characteristics on learning, the idea of Computer Science Replugged focuses on 'hands-on' activities and combines them with immersive technology. By providing a perception of non-mediation, Computer Science Replugged might enable experiences that can contribute additional possibilities to the real activity or enable new activities for teaching Computer Science. Three game-based Educational Virtual Environments were developed as treatments: 'Bill's Computer Workshop' introduces the components of a computer; 'Fluxi's Cryptic Potions' uses a metaphor to teach asymmetric encryption; 'Pengu's Treasure Hunt' is an immersive visualization of finite state machines. A first study with 23 middle school students was conducted to test the instruments in terms of selectivity, the devices' induced levels of presence, and the adequacy of the selected learning objectives. The second study with 78 middle school students playing the environments on different devices (laptop, Mobile Virtual Reality, or head-mounted display) assessed motivational, cognitive, and emotional factors, as well as presence and learning outcomes. An overall analysis showed that pre-test performance, presence, and previous scholastic performance in Maths and German predicted the learning outcomes in the virtual environments. Presence could be predicted by students' positive emotions and by technological immersion. The level of immersion had no significant effect on learning outcomes. While a good-fitting path analysis model indicated that the assumed relations deriving from the EFiL are largely correct for 'Bill's Computer Workshop' and 'Fluxi's Cryptic Potions', not all results of the overall path analysis were significant for the analyses of the particular environments. Presence seems to have a small effect on learning outcomes while being influenced by technological and emotional factors. Even though the level of immersion can be used to predict the level of presence, it is not an appropriate predictor for learning outcomes. For future studies, the questionnaires have to be revised, as some of them suffered from poor scale reliabilities. While the second study could provide indications that the localization of presence and immersion in an existing educational supply-use framework is appropriate, many factors had to be left out. The thesis contributes to existing research as it adds factors that are crucial for learning processes to the discussion on immersive learning from an educational perspective and assesses these factors in hands-on activities in Educational Virtual Environments for Computer Science Education.
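To make the reported predictor analysis concrete: a toy ordinary-least-squares sketch with fabricated numbers, not the thesis data, predicting a post-test score from pre-test performance and a presence rating, the two predictors highlighted in the abstract above.

import numpy as np

# Fabricated sample: columns are pre-test score and presence rating.
X = np.array([[4, 3.2], [7, 4.1], [5, 2.8], [8, 4.5], [6, 3.9]], dtype=float)
y = np.array([5, 8, 5, 9, 7], dtype=float)  # post-test scores

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(dict(zip(["intercept", "pretest", "presence"], coef.round(3))))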
KW - Virtual Reality KW - Supply-Use-Model KW - Immersion KW - Presence KW - Computer Science Education Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-8413 ER - TY - THES A1 - Calmels, Dorothea T1 - Job Sequencing and Tool Switching Problems with a Generalisation to Non-Identical Parallel Machines N2 - Manufacturing tools have been dominating the manufacturing process since the 1960s. The job sequencing and tool switching problem is an NP-hard combinatorial optimization problem that was first introduced in the context of flexible manufacturing systems in the late 1980s. Since then, production systems have undisputedly changed and improved, but manufacturing tools still dominate manufacturing processes. Production and system operation processes are continuously adjusted and optimised to changing customer requirements. If the product variety requires an increasing number of tools for processing that exceeds the local tool magazine capacity of the manufacturing system, tool switches become necessary. Although tool changing times within a manufacturing centre or cell may nowadays be very small due to the high degree of automation, tool switching within a dynamic production environment is still a time-consuming process that must be avoided. In order to minimize the total tool setup time to enhance productivity, the objectives of the basic job sequencing and tool switching problem are to sequence a set of jobs and simultaneously to determine the best tool loading. Therefore, job sequencing and tool switching problems are gaining considerable attention. Several solution approaches to the standard problem and related versions of the problem exist. The first part of this dissertation assesses the current state of the art of the job sequencing and tool switching problem and provides a classification scheme for literature on the job sequencing and tool switching problem and its variations. Only a few authors consider generalisations of the problem because the level of complexity of extended problems is high. A general approach to the job sequencing and tool switching problem with non-identical parallel machines and sequence-dependent setup times is described in this dissertation. A novel mathematical model based on time periods is presented and analysed, which can be adapted to different objective functions. The last part of this dissertation is a quantitative evaluation of fast and effective construction heuristics as well as of an iterated local search algorithm tested on a new set of benchmark instances. As such, this dissertation provides a broad basis for future evaluations of solution approaches to the job sequencing and tool switching problem with non-identical parallel machines and sequence-dependent setup times as well as a basis for further generalisations of the problem, such as tool availability constraints or tool-size-dependent variations. Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-8436 ER - TY - THES A1 - Schlötterer, Jörg T1 - Supporting the Discovery of Long-Tail Resources on the Web N2 - A plethora of resources made available via retrieval systems in digital libraries remains untapped in the so-called long tail of the Web. These long-tail websites get considerably fewer visits than major Web hubs. Zero-effort queries ease the discovery of long-tail resources by proactively retrieving and presenting information based on a user’s context.
However, zero-effort queries over existing digital library structures are challenging, since the underlying retrieval system is only accessible via an API. The information need must be expressed by a query, instead of optimizing the ranking between context and resources in the retrieval system directly. We address three research questions that arise from replacing the user's information-seeking process with zero-effort queries. Our first question addresses the transformation of a user query to an automatic query, derived from the context. We present means to 1) identify the relevant context on different levels of granularity, 2) derive an information need from the context via keyword extraction and personalization, and 3) express this information need in a query scheme that avoids over- or under-specified queries. We address the cold start problem with an approach to bootstrap user profiles from social media, even for passive users. With the second question, we address the presentation of resources in zero-effort query scenarios, presenting guidelines for presentation interfaces in the browser and a visualization of the triadic relationship between context, query and results. QueryCrumbs, a compact query history visualization, supports recalling information found in the past and exploratory search by visualizing qualitative and quantitative query similarity. Our last question addresses the gap between (simple) keyword queries and the representation of resources by rich and complex meta-data. We investigate and extend feature representation learning techniques centered around the skip-gram model with negative sampling. Finally, we present an approach to learn representations from network and text jointly that can cope with the partial absence of one modality. Experimental results show close-to-human performance of our zero-effort query and user profile generation approaches, and the visualizations prove helpful in terms of transparency, efficiency and support for exploratory search. These results indicate that the proposed zero-effort query approach indeed eases the discovery of long-tail resources and that the accompanying visualizations further facilitate this process. The joint representation model provides a first step to bridge the gap between query and resource representation, and we plan to investigate this route further in the future. KW - Data Science KW - Big Data KW - Information Retrieval Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-8539 ER - TY - THES A1 - Ganser, Stefan T1 - Iterative Schedule Optimization for Parallelization in the Polyhedron Model N2 - In high-performance computing, one primary objective is to exploit the performance that the given target hardware can deliver to the fullest. Compilers that have the ability to automatically optimize programs for a specific target hardware can be highly useful in this context. Iterative (or search-based) compilation requires little or no prior knowledge and can adapt more easily to concrete programs and target hardware than static cost models and heuristics. Thereby, iterative compilation helps in situations in which static heuristics do not reflect the combination of input program and target hardware well. Moreover, iterative compilation may enable the derivation of more accurate cost models and heuristics for optimizing compilers.
In this context, the polyhedron model is of help as it provides not only a mathematical representation of programs but, more importantly, a uniform representation of complex sequences of program transformations by schedule functions. The latter facilitates the systematic exploration of the set of legal transformations of a given program. Early approaches to purely iterative schedule optimization in the polyhedron model do not limit their search to schedules that preserve program semantics and, thereby, suffer from the need to explore large numbers of illegal schedules. More recent research ensures the legality of program transformations but presumes a sequential rather than a parallel execution of the transformed program. Other approaches do not perform a purely iterative optimization. We propose an approach to iterative schedule optimization for parallelization and tiling in the polyhedron model. Our approach targets loop programs that profit from data locality optimization and coarse-grained loop parallelization. The schedule search space can be explored either randomly or by means of a genetic algorithm. To determine a schedule's profitability, we rely primarily on measuring the transformed code's execution time. While benchmarking is accurate, it increases the time and resource consumption of program optimization tremendously and can even make it impractical. We address this limitation by proposing to learn surrogate models from schedules generated and evaluated in previous runs of the iterative optimization and to replace benchmarking by performance prediction to the extent possible. Our evaluation on the PolyBench 4.1 benchmark set reveals that, in a given setting, iterative schedule optimization yields significantly higher speedups in the execution of the program to be optimized. Surrogate performance models learned from training data that was generated during previous iterative optimizations can reduce the benchmarking effort without strongly impairing the optimization result. A prerequisite for this approach is a sufficient similarity between the training programs and the program to be optimized. KW - Parallel computers KW - Optimization Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-7936 ER - TY - THES A1 - Kasinathan, Prabhakaran T1 - Workflow-aware access control for the Internet of Things N2 - IoT is defined as a paradigm where "things" have sensing, actuating, communicating, and self-configuring abilities, and are connected to each other and to the Internet. Recent advancements in the manufacturing industry have helped to produce embedded devices with various sensors and actuators in large numbers at a reduced cost. As part of the IoT revolution, everyday devices such as televisions, refrigerators, cars, and even industrial machines are now connected IoT devices. Recent studies have predicted that by 2025 there will be over 75 billion such IoT devices connected to the Internet. The providers of IoT-based services want to integrate their services to satisfy customer requirements. For example, in the mobility scenario, different mobility solution providers want to offer a multi-modal ticket to their customers jointly. In such a distributed and loosely coupled environment, each owner and stakeholder wants to secure his/her own integrity, confidentiality, and functionality goals. This means that distributed rules and conditions defined by the individual owners must be enforced on the participating entities (e.g., customers or partners using their services).
The owners and stakeholders may not necessarily trust each other's actions. Therefore, a mechanism is required that guarantees the rules and conditions specified by the different owners. Attacks on IoT devices and similar computing systems are increasing and getting more advanced. IoT devices are often constrained, i.e., they have limited processing power, memory, and energy. Security mechanisms designed for traditional computing systems, e.g., computers, servers, or mobile computing devices such as smartphones, may not fit such constrained IoT devices. Weak security mechanisms and unenforced security measures were among the main reasons for recent successful attacks on IoT devices and services. As IoT is now used in many sensitive places, including critical infrastructures, securing these devices becomes more critical than ever. This thesis focuses on developing mechanisms that secure IoT devices and services and on enforcing the rules and conditions specified by the owners on entities that want to access the owners' resources. In classical computer systems, security automata are used for specifying security policies, and monitoring mechanisms are used for enforcing such policies. For instance, a reference monitor observes and stops the execution when the security policies are about to be violated; thus, the security policies are enforced. To prevent an adversary from using protected IoT devices or services for malicious purposes, it must be ensured that a workflow is followed to access the protected resource. In distributed IoT systems where the policies are governed by different owners, each owner would like to specify their rules and conditions in their workflows. The workflows contain tasks that must be performed in a particular order. The goal of this thesis is to develop mechanisms to specify and enforce these workflows in the distributed IoT environment. This thesis introduces a distributed Workflow-Aware Access Control (WFAC) framework that restricts the entities to do only what they are allowed to do in a collaborative environment. To gain access to a service protected by the WFAC framework, every workflow participant must prove that he/she is in a particular state of an authorized workflow. Authorized means two things: (a) the owner has authorized the workflow to be executed; (b) the workflow participant is authorized to execute it. This restricts the adversary's access to the devices and their services. The security policies defined by different owners are modeled as workflows and specified using Petri Nets. The policies are then enforced with the help of the WFAC framework, which supports error handling, accountability, integration of practitioner-friendly tools, and interoperability with existing security mechanisms such as OAuth. Thus, the WFAC framework guarantees the integrity of workflows in a distributed environment. KW - Workflow-Aware Access Control for the Internet of Things KW - Petri Nets, Blockchain, Security Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-8915 ER - TY - THES A1 - Tueno, Anselme T1 - Multiparty Protocols for Tree Classifiers N2 - Cryptography is the scientific study of techniques for securing information and communication against adversaries. It is about designing and analyzing encryption schemes and protocols that protect data from unauthorized reading.
However, in our modern information-driven society with highly complex and interconnected information systems, encryption alone is no longer enough, as it makes the data unintelligible, preventing any meaningful computation without decryption. On the one hand, data owners want to maintain control over their sensitive data. On the other hand, there is a high business incentive for collaborating with an untrusted external party. Modern cryptography encompasses different techniques, such as secure multiparty computation, homomorphic encryption or order-preserving encryption, that enable cloud users to encrypt their data before outsourcing it to the cloud while still being able to process and search on the outsourced and encrypted data without decrypting it. In this thesis, we rely on these cryptographic techniques for computing on encrypted data to propose efficient multiparty protocols for order-preserving encryption, decision tree evaluation and kth-ranked element computation. We start with order-preserving encryption (OPE), which allows encrypting data while still enabling efficient range queries on the encrypted data. However, OPE is symmetric, limiting the use case to one client and one server. Imagine a scenario where a Data Owner (DO) outsources encrypted data to the Cloud Service Provider (CSP) and a Data Analyst (DA) wants to execute private range queries on this data. Then either the DO must reveal its encryption key or the DA must reveal the private queries. We overcome this limitation by allowing the equivalent of a public-key OPE. Decision trees are common and very popular classifiers because they are explainable. The problem of evaluating a private decision tree on private data consists of a server holding a private decision tree and a client holding a private attribute vector. The goal is to classify the client’s input using the server’s model such that the client learns only the result of the classification, and the server learns nothing. In a first approach, we represent the tree as an array and execute only d interactive comparisons (instead of 2^d as in existing solutions), where d denotes the depth of the tree. In a second approach, we delegate the complete tree evaluation to the server using somewhat or fully homomorphic encryption, where the ciphertexts are encrypted under the client’s public key. A generalization of a decision tree is a random forest that consists of many decision trees. A classification with a random forest evaluates each decision tree in the forest and outputs the classification label which occurs most often. Hence, the classification labels are ranked by their number of occurrences and the final result is the best-ranked one. The best-ranked element is a special case of the kth-ranked element. In this thesis, we consider the secure computation of the kth-ranked element in a distributed setting with applications in benchmarking and auctions. We propose different approaches for privately computing the kth-ranked element in a star network, using either garbled circuits or threshold homomorphic encryption. KW - Mathematics KW - Cryptology Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-8251 ER - TY - THES A1 - Schmid, Angelika T1 - Geographic and Social Space in Latent Factor Models - Four Essays N2 - Geography, social context, time, and cultural mindset are four (out of many) cornerstones of human interaction.
When building statistical models, their consideration is vital: they all cause dependency between individual observations, violating assumptions of independence and exchangeability. While this can be problematic and inhibit the unbiased inference of parameters, it can also be a fruitful source of insights and enhance prediction performance. One class of models that serves to manage or profit from the presence of dependence is the class of latent variable models. This class of models assumes that the presence of non-explicit, unobserved causes of continuous or discrete nature can explain the observed correlations. Latent variable models explicitly take account of dependency, for example, by modeling an unobserved local source of pollution as a continuous spatial variable. Through their widespread use for information filtering, link prediction, and statistical inference, latent variable models have come to have an essential impact on our daily lives and the way we consume information. The four articles in this thesis shed light on assumptions, usage, and potential drawbacks of latent variable models in various contexts that involve geographic and interaction data. We model unobserved sources of pollution in geophysical data, explore individual taste and mindsets in cross-cultural contexts, and predict the evolution of social relationships in software development projects. This combination of various perspectives contributes to the interdisciplinary exchange of methodological knowledge on the modeling of dependent data. Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-7947 ER - TY - THES A1 - Parra Rodriguez, Juan David T1 - Computational Resource Abuse in Web Applications N2 - Internet browsers include Application Programming Interfaces (APIs) to support Web applications that require complex functionality, e.g., to let end users watch videos, make phone calls, and play video games. Meanwhile, many Web applications employ the browser APIs to rely on the user's hardware to execute intensive computation, access the Graphics Processing Unit (GPU), use persistent storage, and establish network connections. However, providing access to the system's computational resources, i.e., processing, storage, and networking, through the browser creates an opportunity for attackers to abuse resources. Principally, the problem occurs when an attacker compromises a Web site and includes malicious code to abuse its visitor's computational resources. For example, an attacker can abuse the user's system networking capabilities to perform a Denial of Service (DoS) attack against third parties. What is more, computational resource abuse has not received widespread attention from the Web security community because most of the current specifications are focused on content and session properties such as isolation, confidentiality, and integrity. Our primary goal is to study computational resource abuse and to advance the state of the art by providing a general attacker model, multiple case studies, a thorough analysis of available security mechanisms, and a new detection mechanism. To this end, we implemented and evaluated three scenarios where attackers use multiple browser APIs to abuse networking, local storage, and computation.
Further, depending on the scenario, an attacker can use browsers to perform Denial of Service against third-party Web sites, create a network of browsers to store and distribute arbitrary data, or use browsers to establish anonymous connections similar to The Onion Router (Tor). Our analysis also includes a real-life resource abuse case found in the wild, i.e., CryptoJacking, where thousands of Web sites forced their visitors to perform crypto-currency mining without their consent. In the general case, attacks presented in this thesis share the attacker model and two key characteristics: 1) the browser's end user remains oblivious to the attack, and 2) an attacker has to invest few resources in comparison to the resources he obtains. In addition to the analysis of the attacks, we present how existing and upcoming Web security enforcement mechanisms can hinder an attacker, and we discuss their drawbacks. Moreover, we propose a novel detection approach based on browser API usage patterns. Finally, we evaluate the accuracy of our detection model, after training it with the real-life crypto-mining scenario, through a large-scale analysis of the most popular Web sites. KW - Web Security KW - Computational Resource Abuse KW - Crypto Currency Mining KW - Parasitic Computing KW - Computer security KW - Browser Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-7706 ER - TY - THES A1 - Jurgovsky, Johannes T1 - Context-Aware Credit Card Fraud Detection N2 - Credit card fraud has emerged as a major problem in the electronic payment sector. In this thesis, we study data-driven fraud detection and address several of its intricate challenges by means of machine learning methods, with the goal of identifying fraudulent transactions that have been issued illegitimately on behalf of the rightful card owner. In particular, we explore several means to leverage contextual information beyond a transaction's basic attributes on the transaction level, sequence level and user level. On the transaction level, we aim to identify fraudulent transactions which, in terms of their attribute values, are globally distinguishable from genuine transactions. We provide an empirical study of the influence of class imbalance and forecasting horizons on the classification performance of a random forest classifier. We augment transactions with additional features extracted from external knowledge sources and show that external information about countries and calendar events improves classification performance most noticeably on card-not-present transactions. On the sequence level, we aim to detect frauds that are inconspicuous in the background of all transactions but peculiar with respect to the short-term sequence they appear in. We use a Long Short-term Memory network (LSTM) for modeling the sequential succession of transactions. Our results suggest that LSTM-based modeling is a promising strategy for characterizing sequences of card-present transactions, but it is not adequate for card-not-present transactions. On the user level, we elaborate on feature aggregations and propose a flexible concept allowing us to define numerous features by means of a simple syntax. We provide a CUDA-based implementation for the computationally expensive extraction with a speed-up of two orders of magnitude over a single-core implementation. Our feature selection study reveals that aggregates extracted from users' transaction sequences are more useful than those extracted from merchant sequences.
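To illustrate the kind of user-level feature aggregation described in the fraud-detection record above: a small sketch in which the field names and the 24-hour window are illustrative assumptions, not the thesis' actual aggregation syntax.

from datetime import datetime, timedelta

def aggregate(transactions, card_id, now, window=timedelta(hours=24)):
    # Count and sum one card's transactions inside a sliding time window.
    recent = [t for t in transactions
              if t["card"] == card_id and now - window <= t["time"] <= now]
    return {"tx_count_24h": len(recent),
            "amount_sum_24h": sum(t["amount"] for t in recent)}

txs = [
    {"card": "c1", "amount": 25.0, "time": datetime(2019, 5, 1, 9, 0)},
    {"card": "c1", "amount": 610.0, "time": datetime(2019, 5, 1, 21, 30)},
    {"card": "c2", "amount": 12.5, "time": datetime(2019, 5, 1, 10, 0)},
]
print(aggregate(txs, "c1", datetime(2019, 5, 2, 15, 0)))
# {'tx_count_24h': 1, 'amount_sum_24h': 610.0}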
Moreover, we discover multiple sets of candidate features with performance equivalent to that of manually engineered aggregates while being structurally different. Regarding future work, we motivate the use of simple and transparent machine learning methods for credit card fraud detection and we sketch a simple user-focused modeling approach. KW - Credit Card Fraud Detection KW - Machine Learning KW - Data Augmentation KW - Feature Engineering KW - Kreditkartenmissbrauch KW - Computersicherheit Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-7622 ER - TY - THES A1 - Taubmann, Benjamin T1 - Improving Digital Forensics and Incident Analysis in Production Environments by Using Virtual Machine Introspection N2 - Main memory forensics and its special form, virtual machine introspection (VMI), are powerful tools for digital forensics and can be used to improve the security of computer-based systems. However, their use in production systems is often not possible. This work identifies the causes and offers practical solutions to apply these techniques in cloud computing and on mobile devices to improve digital forensics and incident analysis. Four key challenges must be tackled. The first challenge is that many existing solutions are not reproducible, for example, because the corresponding software components are unavailable, obsolete, or incompatible. The use of these tools is also often complex and can, in case of incorrect use, lead to a crash of the system being monitored. To solve this problem, this thesis describes the design and implementation of Libvmtrace, which is a framework for the introspection of Linux-based virtual machines. The focus of the developed design is to implement frequently used methods in encapsulated modules so that they are easy for developers to use, optimize and test. The second challenge is that many production systems do not provide an interface for main memory forensics and virtual machine introspection. To address this problem, this thesis describes possible solutions for how such an interface can be implemented on mobile devices and in cloud environments designed to protect main memory from unprivileged access. We discuss how cold boot attacks, the ARM TrustZone and the hypervisor of cloud servers can be used to acquire data from storage. The third challenge is how to reconstruct information from main memory efficiently. This thesis describes how this can be solved by means of two practical examples. The first example involves extracting the keys of encrypted TLS connections from the main memory of applications to decrypt network traffic without affecting the performance of the monitored application. The TLSKex and DroidKex architectures describe two approaches to localize the keys efficiently with the help of semantic knowledge in the main memory of applications. The second example discusses how to monitor and document SSH sessions of potential attackers from outside of a virtual machine. It is important that the monitoring routines are not noticed by an attacker. To achieve this, we evaluate how to optimize the performance of the monitoring mechanism. The fourth challenge is how to deal with the performance degradation caused by introspection in production systems. This thesis discusses how this can be achieved using the example of a SIEM system. To reduce the performance overhead, we describe how to configure the monitoring routine to collect only the information needed to detect incidents.
Also, we describe two approaches that permit the monitoring routine to be dynamically adjusted at runtime to extract more information if necessary so that incidents can be better analyzed. KW - Digital Forensics KW - Virtual Machine Introspection KW - Production Environments KW - Incident Detection KW - Computerforensik KW - Eindringerkennung Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-8319 ER - TY - THES A1 - Horáček, Jan T1 - Algebraic and Logic Solving Methods for Cryptanalysis N2 - Algebraic solving of polynomial systems and satisfiability of propositional logic formulas are not two completely separate research areas, as it may appear at first sight. In fact, many problems coming from cryptanalysis, such as algebraic fault attacks, can be rephrased as solving a set of Boolean polynomials or as deciding the satisfiability of a propositional logic formula. Thus one can analyze the security of cryptosystems by applying standard solving methods from computer algebra and SAT solving. This doctoral thesis is dedicated to studying solvers that are based on logic and algebra separately as well as integrating them into one such that the combined solvers become more powerful tools for cryptanalysis. This dissertation is divided into three parts. In the first part, we recall some theory and basic techniques for algebraic and logic solving. We focus mainly on DPLL-based SAT solving and techniques that are related to border bases and Gröbner bases. In particular, we describe in detail the Border Basis Algorithm and discuss its specialized version for Boolean polynomials called the Boolean Border Basis Algorithm. In the second part of the thesis, we deal with connecting solvers based on algebra and logic. The ultimate goal is to combine the strength of different solvers into one. Namely, we fuse the XOR reasoning from algebraic solvers with the light, efficient design of SAT solvers. As a first step in this direction, we design various conversions from sets of clauses to sets of Boolean polynomials, and vice versa, such that solutions and models are preserved via the conversions. In particular, based on a block-building mechanism, we design a new blockwise algorithm for the CNF to ANF conversion which is geared towards producing fewer and lower degree polynomials. The above conversions allow us to integrate both solvers via a communication interface. To reach an even tighter integration, we consider proof systems that combine resolution and polynomial calculus, i.e., the two most used proof systems in logic and algebraic solving. Based on such a proof system, which we call SRES, we introduce new types of solving algorithms that demonstrate the synergy between Gröbner-like and DPLL-like solving. At the end of the second part of the dissertation, we provide some experiments based on a new benchmark which illustrate that our new method based on DPLL has the potential to outperform CDCL SAT solvers. In the third part of the thesis, we focus on practical attacks on various cryptographic primitives. For instance, we apply SAT solvers in the case of algebraic fault attacks on the symmetric ciphers LED and derivatives of the block cipher AES. The main goal there is to derive so-called fault equations automatically from the hardware description of the cryptosystem and thus automate the attack.
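As a side note, the basic correspondence between clauses and Boolean polynomials that such CNF/ANF conversions build on can be illustrated in a few lines of Python. This is a toy version of the standard textbook encoding over GF(2), not the blockwise conversion algorithm of the thesis:

    from itertools import product

    # Boolean polynomials in algebraic normal form (ANF) over GF(2):
    # a polynomial is a frozenset of monomials, each monomial a frozenset
    # of variable names; the empty monomial is the constant 1.
    ONE = frozenset()

    def add(p, q):                 # XOR: monomials cancel in pairs
        return p ^ q

    def mul(p, q):                 # AND distributes over XOR; x*x = x
        result = set()
        for m1 in p:
            for m2 in q:
                result ^= {m1 | m2}
        return frozenset(result)

    def var(x):
        return frozenset({frozenset({x})})

    def negation(p):               # NOT a = a + 1 over GF(2)
        return add(p, frozenset({ONE}))

    def clause_to_poly(clause):
        # A clause is falsified iff every literal is false, i.e. the
        # product of the negated literal polynomials equals 1; hence the
        # clause is satisfied exactly on the zeros of that product.
        poly = frozenset({ONE})
        for name, positive in clause:
            lit = var(name) if positive else negation(var(name))
            poly = mul(poly, negation(lit))
        return poly

    def evaluate(p, assignment):
        return sum(all(assignment[v] for v in m) for m in p) % 2

    # Example: (x OR NOT y) maps to the polynomial x*y + y.
    p = clause_to_poly([("x", True), ("y", False)])
    for x, y in product([0, 1], repeat=2):
        assert (evaluate(p, {"x": x, "y": y}) == 0) == bool(x or not y)
    print("monomials:", [sorted(m) for m in p])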
To give some extra power to a SAT solver that inverts the hash functions SHA-1 and SHA-2, we describe how to tweak the SAT solver using a programmatic interface such that the propagation of the solver, and thus the attack itself, is improved. KW - Boolean polynomial KW - Border basis KW - SAT solving KW - Combined proof system KW - Algebraic normal form KW - Conjunctive normal form KW - Algebraic fault attack KW - Kryptoanalyse KW - Polynom KW - Beweissystem Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-7731 ER - TY - THES A1 - Keren, Gil T1 - Neural Network Supervision: Notes on Loss Functions, Labels and Confidence Estimation N2 - We consider a number of enhancements to the standard neural network training paradigm. First, we show that carefully designed parameter update rules may replace the need for a loss function and its gradient. We introduce a parameter update rule that generalises the standard cross-entropy gradient, and allows directly controlling the relative effect of easy and hard examples on the training process. We show that the proposed update rule cannot be derived by using a loss function and yields better classification accuracy compared to training with the standard cross-entropy loss. In addition, we study the effect of the loss function choice on the learnt representations. We introduce the Single Logit Classification (SLC) task: classifying whether a given class is the correct class for a given example, in a computationally efficient manner, based on the appropriate class logit alone. A natural principle is proposed, the Principle of Logit Separation (PoLS), as a guideline for choosing and designing loss functions suitable for the SLC task. We mathematically analyse the alignment of eleven existing and novel loss functions with this principle. Experimental results show that using loss functions that are aligned with this principle results in a representation in the logits layer in which each logit is more informative of its class correctness, leading to a considerably better SLC accuracy. Further, we attempt to alleviate the dependency of standard neural network models on large amounts of quality labels. The task of weakly supervised one-shot detection is considered, in which at training time the model is trained without any localisation labels, and at test time it needs to identify and localise instances of unseen classes. We propose the attention similarity networks (ASN) for this task. ASN use a Siamese neural network to compute a similarity score between an exemplar and different locations in a target example. Then, an attention mechanism performs localisation by learning to attend to the correct locations. The ASN model outperforms the relevant baselines for weakly supervised one-shot detection tasks in the audio and computer vision domains. Finally, we consider the problem of quantifying prediction confidence in the regression setting. We propose two novel algorithms for emitting calibrated prediction intervals for neural network regressors, at any given confidence level. The two algorithms require binning of the output space and training the neural network regressor as a classifier. Then, the calibration algorithms choose the intervals in the output space, making sure they contain the amount of posterior probability mass that results in the desired confidence level.
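To illustrate the first idea, a gradient-like update rule that re-weights easy and hard examples, consider the following NumPy sketch. The scaling rule shown here is my own illustrative choice; the thesis derives a different generalisation, one that provably cannot be obtained as the gradient of any loss function:

    import numpy as np

    def softmax(z):
        z = z - z.max(axis=1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)

    # The standard cross-entropy gradient w.r.t. the logits is (p - y).
    # An illustrative generalized rule scales it per example by
    # (1 - p_true)**gamma: gamma > 0 emphasizes hard examples,
    # gamma < 0 emphasizes easy ones. (Hypothetical rule, for intuition.)
    def update_direction(logits, labels, gamma=1.0):
        p = softmax(logits)
        y = np.eye(logits.shape[1])[labels]
        p_true = p[np.arange(len(labels)), labels]
        weight = (1.0 - p_true) ** gamma
        return weight[:, None] * (p - y)

    logits = np.array([[2.0, 0.5, -1.0],   # confident, correct ("easy")
                       [0.2, 0.1, 0.0]])   # uncertain ("hard")
    labels = np.array([0, 2])
    print(update_direction(logits, labels, gamma=2.0))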
KW - Neuronales Netz KW - Maschinelles Lernen Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-8223 ER - TY - THES A1 - Wagner, Marlene T1 - Effectiveness of Flipped Classroom Instruction in Secondary Education T1 - Wirksamkeit des Flipped-Classroom-Konzepts in der Sekundarstufe N2 - Research on flipped classroom instruction has substantially advanced in the past ten years. Flipped classroom refers to an instructional approach in which students study educational videos at home and do homework assignments in class. Since an increasing number of teachers want to adopt the flipped classroom approach in their practice, further research—particularly in the context of secondary education—is clearly required. The two studies presented in this thesis aimed at examining the effectiveness of flipped classroom instruction in secondary education by conducting a meta-analytic synthesis of prior studies and an intervention study with a methodologically new approach. Specifically, the studies investigated whether and under which conditions the flipped classroom approach has a positive impact on student achievement and which learners benefit most from a flipped or video-based classroom. In the first study, meta-analytic methods were used to examine whether the flipped classroom approach, after controlling for sampling error, positively affects student achievement in secondary education. Effect sizes were calculated for the research designs pre-test-post-test (Time), post-test only (PostOnly) and pre-test-post-test with control group (Treatment). Moreover, the impact of four moderator variables as boundary conditions of flipped classroom effectiveness was estimated: disciplinary field, length of the intervention, use of a quiz and use of a learning management system. The meta-analytical findings for the effect size Treatment confirmed the effectiveness of flipped classroom instruction on student achievement in comparison to traditional instruction (Cohen’s d = 0.42). Moderator analyses on the effect size Time showed stronger effects for subjects in the STEM area (science, technology, engineering, mathematics) than for foreign languages and humanities. The effect sizes were also higher for shorter intervention studies than for longer ones and if the quiz at home had been left out. Moderator analyses on the effect sizes PostOnly and Treatment made clear that the effect sizes for intervention studies without a learning management system were higher than for those with a learning management system. The second study aimed to compare flipped classroom instruction with other forms of video-based instruction and determine which types of students benefit most from video-based instruction. Thirty-eight school classes with 848 ninth-grade students took part in a quasi-experimental pre-post-test intervention study over the course of four weeks. Two independent variables were completely crossed, resulting in four experimental conditions: video (at home vs. in class) and instructional method (student-centred vs. teacher-centred). Multilevel analyses revealed that all four experimental conditions were equally effective in promoting students’ learning gains. At-risk, average and excellent students profited least from video-based instruction. Confident and independent students had the highest learning gains from pre- to post-test.
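For reference, the effect-size arithmetic behind such meta-analytic findings can be made explicit in a few lines of Python; the study numbers below are invented for illustration, and the study's actual meta-analytic model may weight and pool effects differently:

    import math

    def cohens_d(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
        # Pooled standard deviation, then standardized mean difference.
        sd_pooled = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                              / (n_t + n_c - 2))
        return (mean_t - mean_c) / sd_pooled

    # Fixed-effect meta-analytic mean: weight each study by 1/variance.
    def fixed_effect_mean(effects, variances):
        weights = [1.0 / v for v in variances]
        return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

    # Hypothetical numbers, for illustration only.
    print(round(cohens_d(78.0, 72.0, 14.0, 15.0, 60, 60), 2))
    print(round(fixed_effect_mean([0.3, 0.5, 0.42], [0.02, 0.04, 0.03]), 2))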
The study constitutes a first step towards a comprehensive evaluation of flipped classroom by using a better-controlled research design and may contribute to a more objective discussion about the positive effects of flipped classroom. N2 - Die Forschung zum Unterrichtskonzept „Flipped Classroom“ hat in den letzten zehn Jahren erhebliche Fortschritte gemacht. Flipped Classroom steht für einen Unterrichtsansatz, bei dem sich Schülerinnen und Schüler zuhause mithilfe von Erklärvideos neue Lerninhalte aneignen und im Unterricht anschließend dazu passende Aufgaben bearbeiten. Da immer mehr Lehrkräfte diesen Unterrichtsansatz in der Praxis anwenden möchten, sind weitere Forschungsarbeiten—insbesondere im Bereich der Sekundarstufe—notwendig. Die beiden in dieser Arbeit vorgestellten Studien zielten darauf ab, die Wirksamkeit des Unterrichtskonzepts „Flipped Classroom“ im Sekundarbereich zu untersuchen, indem eine meta-analytische Synthese früherer Studien und eine Interventionsstudie mit einem methodisch neuen Ansatz durchgeführt wurden. In den Studien wurde insbesondere untersucht, ob und unter welchen Bedingungen sich der Flipped-Classroom-Ansatz positiv auf die Lernleistungen von Schülerinnen und Schüler auswirkt und welche Lernenden am meisten vom videobasierten Unterricht profitieren. In der ersten Studie wurden meta-analytische Methoden verwendet, um zu untersuchen, ob der Flipped-Classroom-Ansatz, nachdem eine Kontrolle des Stichprobenfehlers vorgenommen wurde, die Lernleistung von Schülerinnen und Schülern positiv beeinflusst. Die Effektstärken wurden für die Forschungsdesigns Vor-Test-Nach-Test („Zeit“), „nur Nach-Test“ und Vor-Test-Nach-Test mit Kontrollgruppe („Treatment“) berechnet. Darüber hinaus wurde der Einfluss von vier Moderatorvariablen als Randbedingungen für die Effektivität des Flipped-Classroom-Konzepts geschätzt: disziplinäres Feld, Dauer der Intervention, Verwendung eines Quiz und Verwendung eines Lernmanagementsystems. Die meta-analytischen Ergebnisse für die Effektstärke „Treatment“ bestätigten die Wirksamkeit des Flipped-Classroom-Konzepts auf die Lernleistung der Schülerinnen und Schüler im Vergleich zum traditionellen Unterricht (Cohen‘s d = 0.42). Moderatoranalysen der Effektstärken „Zeit“ zeigten stärkere Effekte für Fächer im MINT-Bereich (Mathematik, Informatik, Naturwissenschaften, Technik) als für fremdsprachliche und geisteswissenschaftliche Fächer. Die Effektstärken waren zudem bei kürzeren Interventionsstudien höher als bei längeren und wenn auf ein Lernquiz zuhause verzichtet wurde. Moderatoranalysen der Effektstärken „Nur Nachtest“ und „Treatment“ haben verdeutlicht, dass die Effektstärken höher ausfallen, wenn kein Lernmanagementsystem verwendet wird. Die zweite Studie zielte darauf ab, das Flipped-Classroom-Konzept mit anderen Formen des videobasierten Unterrichts zu vergleichen und festzustellen, welche Schülertypen am meisten vom videobasierten Unterricht profitieren. 38 Schulklassen mit 848 Schülerinnen und Schülern der neunten Klasse nahmen über einen Zeitraum von vier Wochen an einer quasi-experimentellen Interventionsstudie mit Vor- und Nach-Test teil. Zwei unabhängige Variablen wurden vollständig gekreuzt, was zu vier experimentellen Bedingungen führte: Video (zuhause vs. im Unterricht) und Unterrichtsmethode (schülerzentriert vs. lehrerzentriert). Mehrebenenanalysen ergaben, dass alle vier experimentellen Bedingungen die Lerngewinne der Schülerinnen und Schüler gleichermaßen effektiv förderten. 
Risikoschülerinnen und Risikoschüler sowie durchschnittliche und ausgezeichnete Schülerinnen und Schüler profitierten am wenigsten vom videobasierten Unterricht. Selbstbewusste und selbstständige Schülerinnen und Schüler hatten die höchsten Lerngewinne von Vor- zu Nach-Test. Die Studie stellt einen ersten Schritt in Richtung einer umfassenderen Bewertung des Flipped-Classroom-Konzepts unter Verwendung eines besser kontrollierten Forschungsdesigns dar und könnte zu einer objektiveren Diskussion über die positiven Auswirkungen des Flipped-Classroom-Konzepts beitragen. KW - flipped classroom KW - effectiveness KW - student achievement KW - secondary education KW - meta-analysis Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-8425 ER - TY - THES A1 - Dück, Elena T1 - Identity makes the World go round: Social Constructivist Foreign Policy Analysis N2 - Publication-based dissertation on social constructivist approaches to foreign policy analysis and critical security studies. KW - Kanada KW - Tunesien KW - Frankreich KW - USA KW - Securitization KW - Diskursanalyse KW - Discourse Studies KW - Ontological Security KW - Discourse-bound identity theory KW - Foreign Policy Analysis KW - Außenpolitik Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-8522 ER - TY - BOOK A1 - Huebenthal, Sandra T1 - Memory theory in New Testament studies : exploring new perspectives N2 - This book collects ten of Sandra Huebenthal’s most important contributions to the application of Social Memory Theory in Biblical studies. The volume consists of four parts, each devoted to a particular field of research. Part one addresses the general impact of Social Memory Theory on the New Testament. The second part analyzes how Social Memory Theory adds to exploring the phenomenon of (biblical) intertextuality as a strategy for negotiating Early Christian identity, and the third part investigates how New Testament pseudepigraphy provides a different approach for understanding the negotiation and formation of Christian identities. Finally, part four provides an outlook on how the hermeneutical approach can enhance Patristic research. The ten essays originate from discussions about Social Memory Theory and the New Testament at international conferences; three of them are translations of German contributions, while two are published for the first time in this volume. (Verlagsbeschreibung) KW - social memory KW - intertextuality KW - pseudepigraphy Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-14102 SN - 978-3-657-79081-4 PB - Brill Schöningh CY - Paderborn ER - TY - JOUR A1 - Petit, Albin A1 - Cerqueus, Thomas A1 - Boutet, Antoine A1 - Ben Mokhtar, Sonia A1 - Coquil, David A1 - Brunie, Lionel A1 - Kosch, Harald T1 - SimAttack: private web search under fire JF - Journal of Internet Services and Applications N2 - Web search engines have become an indispensable online service to retrieve content on the Internet. However, using search engines raises serious privacy issues as the latter gather large amounts of data about individuals through their search queries. Two main techniques have been proposed to privately query search engines. A first category of approaches, called unlinkability, aims at disassociating the query and the identity of its requester.
A second category of approaches, called indistinguishability, aims at hiding users’ queries or interests by either obfuscating users’ queries or forging new fake queries. This paper presents a study of the level of protection offered by three popular solutions: Tor-based, TrackMeNot, and GooPIR. For this purpose, we present an efficient and scalable attack – SimAttack – leveraging a similarity metric to capture the distance between preliminary information about the users (i.e., their query history) and a new query. SimAttack de-anonymizes up to 36.7 % of queries protected by an unlinkability solution (i.e., Tor-based), and identifies up to 45.3 and 51.6 % of queries protected by indistinguishability solutions (i.e., TrackMeNot and GooPIR, respectively). In addition, SimAttack de-anonymizes 6.7 % more queries than state-of-the-art attacks and dramatically improves the performance of the attack on TrackMeNot by 23.6 %, while retaining an execution time faster by two orders of magnitude. KW - Privacy KW - Web search KW - Unlinkability KW - Indistinguishability Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-3574 SN - 1869-0238 PB - SpringerOpen ER - TY - BOOK A1 - Wehner, Stefanie ED - Wehner, Stefanie ED - Kurfürst, Sandra T1 - Southeast Asian Transformations N2 - Southeast Asia is one of the most dynamic regions in the world. This volume offers a timely approach to Southeast Asian Studies, covering recent transitions in the realms of urbanism, rural development, politics, and media. While most of the contributions deal with the era of post-independence, some tackle the colonial period and the resulting developments. The volume also includes insights from Southern India. As a tribute to the interdisciplinary project of Southeast Asian Studies, this book brings together authors from disciplines as diverse as area studies, sociology, history, geography, and journalism. KW - Südostasien KW - sozialer Wandel Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-10406 SN - 978-3-8394-5171-7 PB - Transcript CY - Bielefeld ER - TY - JOUR A1 - Sallen, Jeffrey A1 - Hirschmann, Florian A1 - Herrmann, Christian T1 - Evaluation and Adaption of the Trier Inventory for Chronic Stress (TICS) for Assessment in Competitive Sports JF - Frontiers in Psychology N2 - The demands of a career in competitive sports can lead to chronic stress perception among athletes if there is a mismatch between requirements and available coping resources. The Trier Inventory for Chronic Stress (TICS) (Schulz et al., 2004) is said to be thoroughly validated. Nevertheless, it has not yet been subjected to a confirmatory factor analysis. The present study aims (1) to evaluate the factorial validity of the TICS within the context of competitive sports and (2) to adapt a short version (TICS-36). The total sample consisted of 564 athletes (age in years: M = 19.1, SD = 3.70). The factor structure of the original TICS did not adequately fit the present data, whereas the short version presented a satisfactory fit. The results indicate that the TICS-36 is an economical instrument for gathering interpretable information about chronic stress. For assessment in competitive sports with TICS-36, we generated overall and gender-specific norm values.
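As an illustration of that final step, deriving norm values from raw scale scores, the following Python sketch standardizes scores overall and within gender and reports them as T scores (mean 50, SD 10), a common convention for questionnaire norms. The data and the exact norming procedure are hypothetical:

    import numpy as np
    import pandas as pd

    # Hypothetical raw TICS-36 scale scores with gender, illustration only.
    rng = np.random.default_rng(1)
    df = pd.DataFrame({
        "gender": rng.choice(["f", "m"], size=200),
        "score": rng.normal(20, 6, size=200).round(),
    })

    # Standardize to z scores, then rescale to T scores (mean 50, SD 10).
    def t_scores(scores):
        z = (scores - scores.mean()) / scores.std(ddof=1)
        return 50 + 10 * z

    df["T_overall"] = t_scores(df["score"])
    df["T_gender"] = df.groupby("gender")["score"].transform(t_scores)
    print(df.head())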
KW - Chronic stressors KW - Mental health KW - Athletes KW - Stress measurement KW - Olympic sports KW - Factor analysis KW - Measurement invariance Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-5281 SN - 1664-1078 VL - 9 ER - TY - JOUR A1 - Kronawitter, Stefan A1 - Lengauer, Christian T1 - Polyhedral Search Space Exploration in the ExaStencils Code Generator JF - ACM Transactions on Architecture and Code Optimization N2 - Performance optimization of stencil codes requires data locality improvements. The polyhedron model for loop transformation is well suited for such optimizations with established techniques, such as the PLuTo algorithm and diamond tiling. However, in the domain of our project ExaStencils, namely stencil codes, it fails to yield optimal results. As an alternative, we propose a new, optimized, multi-dimensional polyhedral search space exploration and demonstrate its effectiveness: we obtain better results than existing approaches in several cases. We also propose how to specialize the search for the domain of stencil codes, which dramatically reduces the exploration effort without significantly impairing performance. KW - Software performance KW - Source code generation KW - Discrete space search Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-5778 SN - 1544-3973 VL - 15 IS - 4 ER - TY - JOUR A1 - Basmadjian, Robert T1 - Flexibility-Based Energy and Demand Management in Data Centers BT - a Case Study for Cloud Computing JF - Energies N2 - The power demand (kW) and energy consumption (kWh) of data centers have increased drastically due to the increased communication and computation needs of IT services. Leveraging demand and energy management within data centers is a necessity. Thanks to the automated ICT infrastructure empowered by IoT technology, such types of management are becoming more feasible than ever. In this paper, we look at management from two different perspectives: (1) minimization of the overall energy consumption and (2) reduction of peak power demand during demand-response periods. Both perspectives have a positive impact on the total cost of ownership for data centers. We exhaustively reviewed the potential mechanisms in data centers that provide flexibilities, together with flexible contracts such as green service level and supply-demand agreements. We extended the state of the art by introducing the methodological building blocks and foundations of management systems for the above-mentioned two perspectives. We validated our results by conducting experiments on a lab-grade scale cloud computing data center at the premises of HPE in Milano. The obtained results support the theoretical model by highlighting the excellent potential of flexible service level agreements in Green IT: 33% overall energy savings and a 50% reduction of power demand during demand-response periods in the case of data center federation. Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-9251 VL - 2019 IS - 12 SP - 1 EP - 22 PB - MDPI CY - Basel ER - TY - JOUR A1 - Fickert, Thomas T1 - Better Resilient than Resistant—Regeneration Dynamics of Storm-Disturbed Mangrove Forests on the Bay Island of Guanaja (Honduras) during the First Two Decades after Hurricane Mitch (October 1998) JF - Diversity N2 - Located at the interface of land and sea, Caribbean mangroves frequently experience severe disturbances by hurricanes, but in most cases storm-impacted mangrove forests are able to regenerate.
How exactly regeneration proceeds, however, is still a matter of debate: does regeneration, owing to the specific site conditions, follow a true auto-succession, with exactly the same set of species that was present prior to the disturbance driving regeneration, or do different trajectories of regeneration exist? Considering the fundamental ecosystem services mangroves provide, a better understanding of their recovery is crucial. The Honduran island of Guanaja offers ideal settings for the study of regeneration dynamics of storm-impacted mangrove forests. The island was hit in October 1998 by Hurricane Mitch, one of the most intense Atlantic storms of the past century. Immediately after the storm, 97% of the mangroves were classified as dead. In 2005, long-term monitoring of the regeneration dynamics of the mangroves of the island was initiated, employing permanent line-transects at six different mangrove localities all around the island, which were revisited in 2009 and 2016. Due to the pronounced topography of the island, different successional pathways emerge depending on the severity of the previous disturbance. KW - Guanaja KW - Mangrove regeneration KW - Hurricane disturbance KW - Successional pathways Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-5247 SN - 1424-2818 VL - 10 IS - 1 PB - MDPI AG CY - Basel ER - TY - JOUR A1 - Graf Lambsdorff, Johann A1 - Giamattei, Marcus A1 - Werner, Katharina A1 - Schubert, Manuel T1 - Team reasoning - Experimental evidence on cooperation from centipede games JF - PloS ONE N2 - Previous laboratory studies on the centipede game have found that subjects exhibit surprisingly high levels of cooperation. Across disciplines, it has recently been highlighted that these high levels of cooperation might be explained by “team reasoning”, the willingness to think as a team rather than as an individual. We run an experiment with a standard centipede game as a baseline. In two treatments, we seek to induce team reasoning by making a joint goal salient. First, we implement a probabilistic variant of the centipede game that makes it easy to identify a joint goal. Second, we frame the game as a situation where a team of two soccer players attempts to score a goal. This frame increases the salience even more. Compared to the baseline, our treatments induce higher levels of cooperation. In a second experiment, we obtain similar evidence in a more natural environment: a beer garden during the 2014 FIFA Soccer World Cup. Our study contributes to understanding how a salient goal can support cooperation. KW - Sports KW - Games KW - Team reasoning KW - Game theory KW - Altruistic behavior KW - Spieltheorie KW - Kooperatives Verhalten KW - Centipede game Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-5913 SN - 1932-6203 VL - 13 IS - 11 PB - PLOS CY - San Francisco ER - TY - JOUR ED - Rowedder, Simon ED - Wilcox, Phill ED - Brandtstädter, Susanne T1 - Negotiating Chinese infrastructures of modern mobilities : insights from Southeast Asia N2 - From transportation to urbanization, energy and digitalization, China-backed projects of infrastructural development are increasingly common throughout Southeast Asia and the global South as both a means and outcome of development. This trend has accelerated since China’s Belt and Road Initiative (BRI) in 2013. Against this backdrop, the present ASEAS issue invites readers to rethink the roles infrastructure plays in forms of development that place connectivity at the center.
Contents:
Simon Rowedder, Phill Wilcox & Susanne Brandtstädter: Negotiating Chinese Infrastructures of Modern Mobilities: Insights from Southeast Asia
Current Research on Southeast Asia:
Panitda Saiyarod: The Deviated Route: Navigating the Logistical Power Landscape of the Mekong Border Trade
Franziska S. Nicolaisen: The Politicization of Mobility Infrastructures in Vietnam — The Hanoi Metro Project at the Nexus of Urban Development, Fragmented Mobilities, and National Security
Arratee Ayuttacorn: Chinese Investor Networks and the Politics of Infrastructure Projects in the Eastern Economic Corridor in Thailand
Karin Dean: Belt and Road Initiative in Northern Myanmar: The Local World of China’s Global Investments
Mira Käkönen: Entangled Enclaves: Dams, Volatile Rivers, and Chinese Infrastructural Engagement in Cambodia
Research Workshop:
Tim Oakes: Infrastructure Power, Circulation and Suspension
Susanne Brandtstädter: Infrastructural Fragility, Infra-Politics and Jianghu
Book Reviews:
Michael Kleinod-Freudenberg: Book Review: Tappe, O., & Rowedder, S. (Eds.). (2022). Extracting Development: Contested Resource Frontiers in Mainland Southeast Asia
Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-14049 UR - https://aseas.univie.ac.at/index.php/aseas/issue/view/657 IS - 16(2) 2023 SP - 175 EP - 314 PB - SEAS - Society for South-East Asian Studies CY - Wien ER - TY - THES A1 - Le, Mirjam T1 - The Peri-urban City: Cargo Cult Urbanism, Porosity and an Emerging Urban Citizenship in Vietnamese Secondary Cities N2 - The impact of urban development on local residents in the urban periphery oscillates between processes of empowerment and processes of marginalization as well as between state-controlled order and decentralized self-organization. Consequently, local authorities, including urban planners, and local residents need to negotiate their respective roles in the production and usage of urban space defined by transition, transformation and ambiguity. Therefore, this work examines the underlying patterns, interactions and structures which define the negotiation of state-society relations in the framework of a porous peri-urban landscape in Vietnam’s secondary cities. In Chapter 1, peri-urban areas are defined as spaces of transformation where patterns of rural land use are intertwined with patterns of urban land use to create an urban-rural interface with blurred boundaries. The main characteristic of the spatial pattern in Tam Kỳ and Buôn Ma Thuột is urban porosity, which leads to the spatial intertwining of rural, peri-urban and urban spaces. The emerging urban landscape can be defined as a peri-urban city. Three main themes define this peri-urban city beyond porosity: (1) processes of transformation creating the peri-urban city, (2) networks of power and control as means to adapt to these transformations and (3) mobility as a prerequisite for the ability to benefit from the changing landscape in peri-urban cities. Chapter 2 describes how state authorities, urban planners and private investors at the local level in Tam Kỳ and Buôn Ma Thuột use porous space to reproduce the city based on their aspirations. This leads to a cargo cult urbanism, where urban planning is rooted in future aspirations for the city. Plans and construction efforts reference the image of a modern urban future and produce the image of a city which does not exist in the real urban space of Tam Kỳ and Buôn Ma Thuột.
In the meantime, practices based in peri-urban space are often contradictory to the official aspirations of local state authorities. Chapter 3 explores this emerging divergence between the urban space aspired to by the state and the reality of urban space rooted in material and social porosity. Traditional, newly accessible and environmental porosity provide an array of accessible space which transforms the peri-urban city into a social arena of encounters and interaction. Consequently, porosity enables local residents to maintain urban space as commons. This counters the push towards the privatization of urban space by state and private actors and creates multiple aspirations for the urban future. Rooted in urban space as commons, porosity enables the usage of peri-urban spaces as spaces of resistance, as discussed in Chapter 4. Mobility and interaction are means of reproduction in the porous ambiguity of urban materiality but also become means of everyday resistance. The emerging spatialities of emancipation provide opportunities for an emerging urban citizenship. Urban materiality and urban citizenship have a mutually constitutive relationship. KW - peri-urbanization KW - Vietnam KW - urban citizenship KW - urban porosity KW - cargo-cult urbanism Y1 - 2024 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-14097 ER - TY - THES A1 - Bermeitinger, Bernhard T1 - Investigating a Second-Order Optimization Strategy for Neural Networks N2 - In summary, this cumulative dissertation investigates the application of the conjugate gradient method (CG) for the optimization of artificial neural networks (NNs) and compares this method with common first-order optimization methods, especially the stochastic gradient descent (SGD). The presented research results show that CG can effectively optimize both small and very large networks. However, the default machine precision of 32 bits can lead to problems. The best results are only achieved in 64-bit computations. The research also emphasizes the importance of the initialization of the NNs’ trainable parameters and shows that an initialization using singular value decomposition (SVD) leads to drastically lower error values. Surprisingly, shallow but wide NNs, both in Transformer and CNN architectures, often perform better than their deeper counterparts. Overall, the research results recommend a re-evaluation of the previous preference for extremely deep NNs and emphasize the potential of CG as an optimization method. N2 - Zusammenfassend untersucht die vorliegende kumulative Dissertation die Anwendung des konjugierten Gradienten (CG) zur Optimierung künstlicher neuronaler Netzwerke (NNs) und vergleicht diese Methode mit verbreiteten Optimierungsverfahren erster Ordnung, insbesondere dem Stochastischem Gradientenabstieg (SGD). Die in den Arbeiten präsentierten Forschungsergebnisse zeigen, dass CG in der Lage ist, sowohl kleinere als auch sehr große Netzwerke effektiv zu optimieren. Allerdings kann die Maschinengenauigkeit bei 32-Bit-Berechnungen zu Problemen führen, beste Ergebnisse werden erst in 64-Bit-Fließkommazahlen erreicht. Die Forschung betont auch die Bedeutung der Initialisierung der NN-Parameter und zeigt, dass eine Initialisierung mittels Singulärwertzerlegung zu deutlich geringeren Fehlerwerten führt. Überraschenderweise erzielen flachere NNs bessere Ergebnisse als tiefe NNs mit einer vergleichbaren Anzahl an trainierbaren Parametern, unabhängig vom jeweiligen NN, das die künstlichen Daten erzeugt.
Es zeigt sich auch, dass flache, breite NNs, sowohl in Transformer-, als auch in CNN-Architekturen oft besser abschneiden als ihre tieferen Gegenstücke. Insgesamt empfehlen die Forschungsergebnisse eine Neubewertung der bisherigen Präferenz für extrem tiefe NNs und betonen das Potential von CG als Optimierungsmethode. Y1 - 2024 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-14087 ER - TY - JOUR A1 - Erdogan, Gülsah A1 - Fekih Hassen, Wiem T1 - Charging scheduling of hybrid energy storage systems for EV charging stations JF - Energies N2 - The growing demand for electric vehicles (EV) in the last decade and the most recent European Commission regulation to only allow EV on the road from 2035 have made it necessary to design a cost-effective and sustainable EV charging station (CS). A crucial challenge for charging stations arises from matching fluctuating power supplies and meeting peak load demand. The overall objective of this paper is to optimize the charging scheduling of a hybrid energy storage system (HESS) for EV charging stations while maximizing PV power usage and reducing grid energy costs. This goal is achieved by forecasting the PV power and the load demand using different deep learning (DL) algorithms such as the recurrent neural network (RNN) and long short-term memory (LSTM). Then, the predicted data are used to design a scheduling algorithm that determines the optimal charging time slots for the HESS. The findings demonstrate the efficiency of the proposed approach, showcasing a root-mean-square error (RMSE) of 5.78% for real-time PV power forecasting and 9.70% for real-time load demand forecasting. Moreover, the proposed scheduling algorithm reduces the total grid energy cost by 12.13%. KW - scheduling optimization KW - HESS KW - PV power KW - load demand KW - RNN KW - LSTM KW - GRU KW - cost reduction Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-14128 VL - 2023 IS - 16 PB - MDPI CY - Basel ER - TY - JOUR A1 - Narh, John A1 - Wehner, Stefanie A1 - Ungruhe, Christian A1 - Eberth, Andreas T1 - The role of translocal practices in a natural climate solution in Ghana JF - Climate N2 - People-centred reforestation is one of the ways to achieve natural climate solutions. Ghana has established a people-centred reforestation programme known as the Modified Taungya System (MTS) where local people are assigned degraded forest reserves to practice agroforestry. Given that the MTS is a people-centred initiative, socioeconomic factors are likely to have an impact on the reforestation drive. This study aims to understand the role of the translocal practices of remittances and visits by migrants in the MTS. Using multi-sited, sequential explanatory mixed methods and the lens of socioecological systems, the study shows that social capital and the socioeconomic obligations of cash remittances from, as well as visits by, migrants to their communities of origin play positive roles in reforestation under the MTS. Specifically, translocal households have access to, and use, remittances to engage relatively better in the MTS than households that do not receive remittances. This shows that translocal practices can have a positive impact on the environment in the area of origin of migrants where people-centred environmental policies are in place.
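Returning to the EV charging record above (Erdogan & Fekih Hassen): the scheduling step can be sketched as a toy greedy procedure that charges the storage in the cheapest slots, drawing on forecast PV surplus first. All numbers are invented, and the paper's algorithm, which builds on the DL forecasts, is considerably richer:

    # Greedy toy scheduler: charge the hybrid energy storage system (HESS)
    # in the time slots with the lowest effective cost, using forecast PV
    # surplus (cost 0) before grid energy. All values are invented.
    pv_surplus = [0.0, 2.0, 5.0, 4.0, 0.0, 0.0]        # kWh per slot
    grid_price = [0.30, 0.28, 0.25, 0.26, 0.35, 0.40]  # EUR/kWh
    slot_cap = 3.0                                     # max charge per slot
    demand = 10.0                                      # energy to store (kWh)

    # Prefer slots with PV surplus; otherwise the cheapest grid slots.
    # (A simplification: partial PV coverage is handled below, not in the key.)
    slots = sorted(range(len(grid_price)),
                   key=lambda t: 0.0 if pv_surplus[t] > 0 else grid_price[t])

    plan, remaining, cost = {}, demand, 0.0
    for t in slots:
        if remaining <= 0:
            break
        amount = min(slot_cap, remaining)
        from_pv = min(amount, pv_surplus[t])
        cost += (amount - from_pv) * grid_price[t]
        plan[t] = amount
        remaining -= amount

    print("charging plan (slot -> kWh):", plan, "grid cost:", round(cost, 2))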
KW - agroforestry KW - multi-sited research KW - reforestation KW - remittances KW - sequential explanatory mixed methods KW - socioecological systems Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-14138 VL - 2023 IS - 11 PB - MDPI CY - Basel ER - TY - THES A1 - Klaus, Tina T1 - Complexity Analysis of Quantizations of Multidimensional Stochastic Differential Equations N2 - The dissertation is located in the field of quantizations of certain stochastic processes, namely a solution X of a multidimensional stochastic differential equation (SDE). The quantization problem for X consists in approximating X by a random element which takes only finitely many values. Our main interest lies in the investigation of the asymptotic behavior of the Nth minimal quantization error of X as N tends to infinity, which incorporates the determination of both the sharp rate of convergence and explicit asymptotic constants. In particular, explicit asymptotic constants have so far been unknown in the context of multidimensional SDEs. Furthermore, as part of our analysis, we provide a method which yields a strongly asymptotically optimal sequence of N-quantizations of X. In certain special cases our method is fully constructive and the algorithm is easy to implement. KW - Stochastische Differentialgleichung KW - Komplexität KW - Quantifizierung Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-7665 ER - TY - THES A1 - Awwad, Tarek T1 - Context-Aware Worker Selection For Efficient Quality Control In Crowdsourcing N2 - In the last decade, crowdsourcing has proved its ability to address large-scale data collection tasks, such as labeling large data sets, at a low cost and in a short time. However, the performance and behavior variability between workers, as well as the variability in task designs and contents, induces an unevenness in the quality of the produced contributions and, thus, in the final output quality. In order to maintain the effectiveness of crowdsourcing, it is crucial to control the quality of the contributions. Furthermore, maintaining the efficiency of crowdsourcing requires the time and cost overhead related to the quality control to be at its lowest. While effective, current quality control techniques such as contribution aggregation, worker selection, context-specific reputation systems, and multi-step workflows suffer from fairly high time and budget overheads and from their dependency on prior knowledge about individual workers. In this thesis, we address this challenge by leveraging the similarity between completed and incoming tasks as well as the correlation between the workers' declarative profiles and their performance in previous tasks in order to perform an efficient task-aware worker selection. To this end, we propose the CAWS (Context-Aware Worker Selection) method, which operates in two phases: in an offline phase, completed tasks are clustered into homogeneous groups, for each of which the correlation with the workers' declarative profiles is learned. Then, in the online phase, incoming tasks are matched to one of the existing clusters and the corresponding, previously inferred profile model is used to select the most reliable online workers for the given task. Using declarative profiles helps eliminate any probing process, which reduces the time and the budget while maintaining the crowdsourcing quality.
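The two-phase structure of CAWS described above can be illustrated with an off-the-shelf vectorizer and clustering algorithm. This toy uses TF-IDF and k-means as stand-ins; the thesis's task vectorization and its per-cluster profile models are more elaborate:

    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfVectorizer

    # Offline phase: cluster completed task descriptions (toy corpus).
    completed = [
        "label images of cats and dogs",
        "tag animal photos with species",
        "transcribe short audio clips",
        "type out spoken sentences from recordings",
    ]
    vec = TfidfVectorizer()
    X = vec.fit_transform(completed)
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

    # (Per cluster, CAWS would now learn which declarative worker-profile
    #  attributes correlate with good performance; omitted here.)

    # Online phase: route an incoming task to the closest cluster and use
    # that cluster's profile model to select workers.
    incoming = vec.transform(["classify pictures of birds"])
    print("matched cluster:", km.predict(incoming)[0])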
Furthermore, the set of completed tasks, when compared to a probing task split, provides a larger corpus from which a more precise profile model can be learned. This translates to a better selection quality, especially for harder tasks. In order to evaluate CAWS, we introduce CrowdED (Crowdsourcing Evaluation Dataset), a rich dataset to evaluate quality control methods and quality-driven task vectorization and clustering. The generation of CrowdED relies on a constrained sampling approach that allows producing a task corpus which respects both the budget and type constraints. Besides helping to evaluate CAWS, and through its generality and richness, CrowdED helps to plug the benchmarking gap present in the crowdsourcing quality control community. Using CrowdED, we evaluate the performance of CAWS in terms of the quality of the worker selection and in terms of the achieved time and budget reduction. Results show the following: first, automatic grouping is able to achieve a learning quality similar to that of job-based grouping; second, CAWS is able to outperform state-of-the-art profile-based worker selection when it comes to quality. This is especially true when strong budget and time constraints are present on the requester side. Finally, we complement our work with a software contribution consisting of an open source framework called CREX (CReate Enrich eXtend). CREX allows the creation, the extension and the enrichment of crowdsourcing datasets. It provides the tools to vectorize, cluster and sample a task corpus to produce constrained task sets and to automatically generate custom crowdsourcing campaign sites. N2 - Im letzten Jahrzehnt hat Crowdsourcing seine Fähigkeit bewiesen große Datensammelaufgaben, wie die Beschriftung großer Datensätze, zu geringen Kosten und in kurzer Zeit zu bewältigen. Die Leistungs- und Verhaltensschwankungen zwischen den Arbeitern sowie die Variabilität in den Aufgabenentwürfen und -inhalten führen jedoch zu einer Ungleichmäßigkeit in der Qualität der erworbenen Beiträge und somit in der endgültigen Ausgabequalität. Um die Effektivität von Crowdsourcing zu erhalten, ist es entscheidend die Qualität der einzelnen Beiträge zu kontrollieren. Darüber hinaus erfordert die Aufrechterhaltung der Effizienz von Crowdsourcing, dass der Zeit- und Kostenaufwand für die Qualitätskontrolle am geringsten ist. Effektive, aktuelle Qualitätskontrolltechniken wie die Aggregation von Beiträgen, die gezielte Auswahl von Arbeitern, kontextspezifische Reputationssysteme und mehrstufige Workflows leiden unter ziemlich hohen Zeit- und Budgetzwangslagen und von ihrer Abhängigkeit von vorausgehenden Kenntnissen über die einzelnen Arbeiter. In dieser Arbeit gehen wir diese Herausforderungen an, indem wir die Ähnlichkeit zwischen abgeschlossenen und eingehenden Aufgaben sowie die Korrelation zwischen den von Arbeitern deklarierten Profilen und deren Leistung in früheren Aufgaben nutzen, um eine effiziente aufgabenbewusste Arbeiterauswahl durchzuführen. Zu diesem Zweck schlagen wir eine zweiphasige Methode vor: CAWS (Context Aware Worker Selection). In einer Offline-Phase werden bereits bearbeitete Aufgaben in homogene Cluster gruppiert, für welche jeweils die Korrelation mit dem vorab deklarierten Profil der Arbeiter erlernt wird.
In der Online-Phase werden eingehende Aufgaben dann einem der vorhandenen Cluster zugeordnet, und das entsprechende, zuvor erschlossene Profilmodell wird dazu verwendet, um die vertrauenswürdigsten Online-Mitarbeiter für die gegebene Aufgabe auszuwählen. Die Verwendung von deklarativen Profilen hilft dabei jeglichen Sondierungsprozess zu eliminieren, wobei Zeit und Kosten reduziert werden und gleichzeitig die Crowdsourcing-Qualität beibehalten wird. Darüber hinaus bietet das Aggregat der abgeschlossenen Aufgaben im Vergleich zu einer Aufgabenaufteilung durch Sondierung einen größeren Korpus, aus dem ein präziseres Profilmodell erlernt werden kann. Dies führt zu einer besseren Auswahlqualität, insbesondere für schwierigere Aufgaben. Um CAWS zu evaluieren, stellen wir CrowdED (Crowdsourcing Evaluation Dataset) vor, einen umfassenden Datensatz zur Evaluierung von Qualitätskontrollmethoden und qualitätsgetriebener Aufgaben-Vektorisierung und Clusterbildung. Die Generierung von CrowdED basiert auf einem bedingten Stichprobeverfahren, welches es ermöglicht, einen Aufgaben-Corpus zu erstellen, der sowohl die Budget- als auch die Typ-Bedingungen einhält. Neben seiner Allgemeingültigkeit und Reichhaltigkeit, hilft CrowdED nicht nur bei der Bewertung von CAWS, sondern es hilft auch dabei, die Benchmarking-Lücke in der Crowdsourcing-Community für Qualitätskontrolle zu schließen. Mit CrowdED evaluieren wir die Leistung von CAWS im Hinblick auf die Qualität der Arbeiterauswahl und auf die erreichte Zeit- und Kostenreduzierung. Die Ergebnisse zeigen folgendes: Zum einen kann mit der automatischen Gruppierung eine Lernqualität ähnlich der von Job-basierten Gruppierungen erreicht werden. Und zweitens ist CAWS in der Lage, die aktuellen profilbasierten Auswahlmethoden in Bezug auf Qualität zu übertreffen. Dies gilt insbesondere dann, wenn auf der Anfordererseite starke Budget- und Zeitbeschränkungen bestehen. Schließlich ergänzen wir unsere Arbeit mit einer Software, die aus einem lizenzfreien Framework namens CREX (CReate Enrich eXtend) besteht. CREX ermöglicht die Erstellung, Erweiterung und Anreicherung von Crowdsourcing-Datensätzen. Es liefert die nötigen Werkzeuge um einen Aufgabenkorpus zu vektorisieren, zu gruppieren und zu samplen, um eingeschränkte Aufgabensätze zu erzeugen und um automatisch benutzerdefinierte Crowdsourcing-Kampagnen-Seiten zu generieren. KW - Crowdsourcing KW - Quality control KW - Machine learning KW - Qualitätssicherung KW - Open Innovation Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-7409 ER - TY - THES A1 - Wahl, Florian T1 - Methods for monitoring the human circadian rhythm in free-living N2 - Our internal clock, the circadian clock, determines at which times we have our best cognitive abilities, when we are physically strongest, and when we are tired. Circadian clock phase is influenced primarily through exposure to light. A direct pathway from the eyes to the suprachiasmatic nucleus, where the circadian clock resides, is used to synchronise the circadian clock to external light-dark cycles. In modern society, with the ability to work anywhere at any time and a full social agenda, many struggle to keep internal and external clocks synchronised. Living against our circadian clock makes us less efficient and poses serious health risks, especially when sustained over a long period of time, e.g., in shift workers. Assessing circadian clock phase is a cumbersome and uncomfortable task.
A common method, dim light melatonin onset testing, requires a series of eight saliva samples taken in hourly intervals while the subject stays in a dim light condition from 5 hours before until 2 hours past their habitual bedtime. At the same time, sensor-rich smartphones have become widely available and wearable computing is on the rise. The hypothesis of this thesis is that smartphones and wearables can be used to record sensor data to monitor human circadian rhythms in free-living. To test this hypothesis, we conducted research on specialised wearable hardware and smartphones to record relevant data, and developed algorithms to monitor circadian clock phase in free-living. We first introduce our smart eyeglasses concept, which can be personalised to the wearer’s head and 3D-printed. Furthermore, hardware was integrated into the eyewear to recognise typical activities of daily living (ADLs). A light sensor integrated into the eyeglasses bridge was used to detect screen use. In addition to wearables, we also investigate if sleep-wake patterns can be revealed from smartphone context information. We introduce novel methods to detect sleep opportunity, which incorporate expert knowledge to filter and fuse classifier outputs. Furthermore, we estimate light exposure from smartphone sensor and weather information. We applied the Kronauer model to compare the phase shift resulting from head light measurements, wrist measurements, and smartphone estimations. We found it was possible to monitor circadian phase shift from light estimation based on smartphone sensor and weather information with a weekly error of 32±17min, which outperformed wrist measurements in 11 out of 12 participants. Sleep could be detected from smartphone use with an onset error of 40±48 min and wake error of 42±57 min. Screen use could be detected with smart eyeglasses with 0.9 ROC AUC for ambient light intensities below 200 lux. Nine clusters of ADLs were distinguished using Gaussian mixture models with an average accuracy of 77%. In conclusion, a combination of the proposed smartphone and smart eyeglasses applications could support users in synchronising their circadian clock to the external clocks, thus living a healthier lifestyle. KW - context recognition KW - human circadian rhythm KW - machine learning KW - sleep timing KW - smart eyeglasses KW - Tagesrhythmus Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-7607 ER - TY - THES A1 - Kronawitter, Stefan T1 - Automatic Performance Optimization of Stencil Codes N2 - A widely used class of codes is that of stencil codes. Their general structure is very simple: data points in a large grid are repeatedly recomputed from neighboring values. This predefined neighborhood is the so-called stencil. Despite their very simple structure, stencil codes are hard to optimize since only a few computations are performed while a comparatively large number of values have to be accessed, i.e., stencil codes usually have a very low computational intensity. Moreover, the set of optimizations and their parameters also depend on the hardware on which the code is executed. To cut a long story short, current production compilers are not able to fully optimize this class of codes, and optimizing each application by hand is not practical. As a remedy, we propose a set of optimizations and describe how they can be applied automatically by a code generator for the domain of stencil codes.
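For readers unfamiliar with the domain, the following NumPy reference version of a 3D 7-point Jacobi sweep shows the kind of kernel in question (a plain, untiled baseline; the generator's point is precisely to go beyond such a version):

    import numpy as np

    def jacobi_7pt(u):
        # One sweep of a 3D 7-point Jacobi stencil: each interior point is
        # recomputed from itself and its six face neighbors.
        v = u.copy()
        v[1:-1, 1:-1, 1:-1] = (u[1:-1, 1:-1, 1:-1]
                               + u[:-2, 1:-1, 1:-1] + u[2:, 1:-1, 1:-1]
                               + u[1:-1, :-2, 1:-1] + u[1:-1, 2:, 1:-1]
                               + u[1:-1, 1:-1, :-2] + u[1:-1, 1:-1, 2:]) / 7.0
        return v

    u = np.random.default_rng(0).random((64, 64, 64))
    for _ in range(10):          # repeated sweeps: few arithmetic operations
        u = jacobi_7pt(u)        # per memory access, i.e. low computational
    print(u.mean())              # intensity, hence the need for tiling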
A combination of a space and time tiling is able to increase the data locality, which significantly reduces the memory-bandwidth requirements: a standard three-dimensional 7-point Jacobi stencil can be accelerated by a factor of 3. This optimization can target basically any stencil code, while others are more specialized. For example, support for arbitrary linear data layout transformations is especially beneficial for colored kernels, such as a Red-Black Gauss-Seidel smoother. On the one hand, an optimized data layout for such kernels reduces the bandwidth requirements while, on the other hand, it simplifies explicit vectorization. Other notable optimizations described in detail are redundancy elimination techniques to eliminate common subexpressions both in a sequence of statements and across loop boundaries, arithmetic simplifications and normalizations, and the vectorization mentioned previously. In combination, these optimizations are able to increase the performance not only of the model problem given by Poisson’s equation, but also of real-world applications: an optical flow simulation and the simulation of a non-isothermal and non-Newtonian fluid flow. KW - Optimierung KW - Codegenerierung Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-7618 ER - TY - THES A1 - Bidler, Margarita T1 - Consumers' Privacy-Related Decision-Making in the Digital Landscape N2 - Nowadays, consumers are often required to disclose private data in various contexts such as while surfing the internet, downloading a mobile application, or engaging in a business relationship with a firm. Privacy-related decision-making research has so far mainly investigated data disclosure as a cognitive risk-benefit trade-off analysis. While this cognitive approach might be appropriate for situations where consumers have the opportunity for cognitive evaluations, there are many situations in the modern landscape where consumers cannot or do not want to engage in cognitive processing. Decision-making under stress or data disclosure to a business network of collaborating firms, for example, constitutes a challenge to purely cognitive decision-making approaches, calling for an extension of the established paradigm of cognitive privacy-related decision-making. This dissertation advocates for the crucial role of affective processing in many modern data disclosure situations, where consumers do not engage in purely cognitive processing due to external hindrances or a lack of personal involvement in the data disclosure situation. KW - privacy-related decision-making Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-7996 ER - TY - THES A1 - Stoffer, Torben T1 - Service Provisions and Business Relationships in the Digital Era – Four Essays in the B2B Context N2 - Digitalization has fundamentally changed how services are provided and how service providers and their customers interact with each other in the business-to-business (B2B) context. Against the backdrop of these developments, this thesis considers – in four essays – the changes brought about by both service and sales digitalization. Each essay investigates, for one research topic, which aspects of existing knowledge regarding non-digital services and/or sales can be transferred to digital services and sales and which aspects must be adjusted. The aim is to support B2B firms that offer or receive services, or plan to do so in the future, as they cope with the challenges of service and sales digitalization.
In doing so, the first and second essays investigate the contingency effect of service digitalization on service characteristics from the provider and customer views, respectively. Both essays aim to explain service value as an endogenous variable. In the first essay, service modularity and service flexibility are considered antecedents that help explain service value. The second essay investigates the effect of customer co-creation on service value. Whereas the first and second essays focus on service characteristics, the third and fourth essays focus on business relationships. Both essays explain relational conflict, one important facet of business relationships, as the endogenous variable and consider the perspectives of both providers and customers. The third essay elaborates on the diverging effect of service digitalization on relational conflict from these two perspectives. The fourth essay incorporates both service and sales digitalization and investigates the contingency effects of the two forms of digitalization on the relationship between coercive power use and relational conflict. In conclusion, this thesis provides a more fine-grained view of the construct of digitalization by differentiating explicitly between service digitalization and sales digitalization and introduces a new conceptualization of service (see all four essays) and sales digitalization (see the fourth essay) by treating digitalization as a continuum. Furthermore, this thesis investigates the opportunities and challenges brought about by digitalization. In particular, the first and second essays show the opportunities service digitalization creates for providers, who could benefit from service modularity, and customers, who could benefit from the integration of their own resources into service provisions. In addition, the third essay shows that service digitalization has a positive effect on relational conflict for providers and, conversely, a negative effect for customers. The fourth essay shows that sales and service digitalization positively moderate the effect of coercive power use on relational conflict for weaker parties in business relationships (except for weaker providers) but not for stronger parties. In sum, this thesis contributes to a better understanding of the consequences of service and sales digitalization and provides recommendations for companies facing challenges and decisions related to this development. Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-7558 ER - TY - THES A1 - Luzsa, Robert T1 - A Psychological and Empirical Investigation of the Online Echo Chamber Phenomenon N2 - In the public debate it is often assumed that communication in so-called “Echo Chambers” - online structures in which like-minded people share mostly messages that confirm their mutual, shared attitudes - can lead to negative outcomes such as increased societal polarization between groups holding opposing beliefs. This thesis aimed to examine this assumption from a psychological perspective and to substantiate it empirically. First, based on existing research and psychological theories, a working definition of Echo Chambers was formulated that highlights two key factors: Selective Exposure to attitudinally congruent messages and communication in homogeneous networks.
Then, three studies were conducted to test links between these factors and two individual-level outcomes that are associated with subjects' actual behavior: their False Consensus, that is, how strongly subjects perceive the public to be in agreement with their own attitudes, and their Intergroup Bias, which reflects the degree to which subjects identify as members of an in-group that is in conflict with negatively perceived out-groups. The studies employed questionnaire-based, experimental, and real-world data-driven approaches. Overall, they confirm that exposure to Echo Chamber-like online structures can indeed lead to a more favorably distorted perception of public opinions and to more signs of Intergroup Bias in subjects' communicational style. Thus, the thesis provides the first psychologically grounded empirical evidence for effects of online Echo Chamber exposure on behavior-related individual-level outcomes. The results can serve as a basis for further research as well as for the discussion of possible strategies to counter negative effects of online Echo Chambers. KW - False Consensus KW - Echo Chamber KW - Psychology KW - Falscher Konsensus-Effekt KW - Psychologie Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-7419 ER - TY - CHAP A1 - Berger, Christian A1 - Reiser, Hans P. A1 - Sousa, João A1 - Bessani, Alysson T1 - Resilient Wide-Area Byzantine Consensus Using Adaptive Weighted Replication T2 - 38th IEEE International Symposium on Reliable Distributed Systems (SRDS 2019) N2 - In geo-replicated systems, the heterogeneous latencies of connections between replicas limit the system's ability to achieve fast consensus. State machine replication (SMR) protocols can be refined for their deployment in wide-area networks by using a weighting scheme for active replication that employs additional replicas and assigns higher voting power to faster replicas. Utilizing more variability in quorum formation allows replicas to proceed more swiftly to subsequent protocol stages, thus decreasing consensus latency. However, if network conditions vary during the system's lifespan or faults occur, the system needs a solution to autonomously adjust to new conditions. We incorporate the idea of self-optimization into geographically distributed, weighted replication by introducing AWARE, an automated and dynamic voting weight tuning and leader positioning scheme. AWARE measures replica-replica latencies and uses a prediction model, striving to minimize the system's consensus latency. In experiments using different Amazon EC2 regions, AWARE dynamically optimizes consensus latency by self-reliantly finding a fast weight configuration, yielding latency gains observed by clients located across the globe. (A simplified weighting sketch follows below.) KW - adaptiveness, weighted replication, consensus, geo-replication, Byzantine fault tolerance, self-optimization Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-7537 PB - IEEE Xplore ER - TY - CHAP A1 - Berger, Christian A1 - Reiser, Hans P. T1 - Scaling Byzantine Consensus: A Broad Analysis T2 - SERIAL'18 Proceedings of the 2nd Workshop on Scalable and Resilient Infrastructures for Distributed Ledgers N2 - Blockchains and distributed ledger technology (DLT) that rely on Proof-of-Work (PoW) typically show limited performance.
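Referring back to the AWARE record above, here is a hedged Python sketch of the weighted-quorum idea: the latency matrix, weight multiset, and quorum threshold are invented, and the exhaustive search merely stands in for AWARE's prediction model; this is not the published algorithm.

from itertools import permutations

# Made-up symmetric replica-to-replica latencies in milliseconds.
LAT = [[0, 30, 80, 110, 150],
       [30, 0, 60, 100, 140],
       [80, 60, 0, 50, 90],
       [110, 100, 50, 0, 40],
       [150, 140, 90, 40, 0]]
WEIGHTS = (2, 2, 1, 1, 1)  # two "high" voting weights, as in weighted replication
QUORUM = 5                 # simplified weighted-quorum threshold (illustrative)

def quorum_latency(leader, w):
    # Time until replies carrying total weight >= QUORUM have reached the leader.
    acc = 0
    for j in sorted(range(len(LAT)), key=lambda r: LAT[leader][r]):
        acc += w[j]
        if acc >= QUORUM:
            return LAT[leader][j]
    return float("inf")

best = min(((quorum_latency(l, w), l, w)
            for l in range(len(LAT))
            for w in set(permutations(WEIGHTS))),
           key=lambda x: x[0])
print("predicted best (latency ms, leader, weights):", best)

Exhaustive search over weight permutations is only feasible for a handful of replicas; the point is simply that shifting the high weights toward mutually close replicas lowers the predicted quorum latency.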
Several recent approaches incorporate Byzantine fault-tolerant (BFT) consensus protocols in their DLT design, as Byzantine consensus allows for increased performance and energy efficiency and also offers proven liveness and safety properties. While there has been a broad variety of research on BFT consensus protocols over the last decades, those protocols were originally not intended to scale to a large number of nodes. Thus, the quest for scalable BFT consensus was initiated with the emerging research interest in DLT. In this paper, we first provide a broad analysis of various optimization techniques and approaches used in recent protocols to scale Byzantine consensus for large environments such as BFT blockchain infrastructures. We then present an overview of both efforts and assumptions made by existing protocols and compare their solutions. KW - Distributed Ledgers KW - Blockchain KW - Byzantine Fault-Tolerant Consensus Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-7526 SN - 978-1-4503-6110-1 PB - ACM CY - New York, NY, USA ER - TY - THES A1 - Lachat, Paul T1 - Detecting Inference Attacks Involving Sensor Data N2 - The collection of personal information by organizations has become increasingly essential for social interactions. Nevertheless, according to the GDPR (General Data Protection Regulation), organizations have to protect collected data. Access Control (AC) mechanisms are traditionally used to secure information systems against unauthorized access to sensitive data. The increased availability of personal sensor data, thanks to IoT-oriented applications, motivates new services to offer insights about individuals. Consequently, data mining algorithms have been proposed to infer personal insights from collected sensor data. Although they can be used for genuine purposes, attackers can leverage those outcomes, combining them with other types of data, to further breach individuals' privacy. Thus, bypassing AC mechanisms thanks to such insights is a concrete problem. We propose an inference detection system based on the analysis of queries issued on a sensor database. The knowledge obtained through these queries, and the inference channels corresponding to the use of data mining algorithms on sensor data to infer individual information, are described using the Raw sensor data based Inference ChannEl Model (RICE-M). The detection is carried out by the RICE-M based inference detection System (RICE-Sy). RICE-Sy considers, at the time of a query, the knowledge that a user obtains via the new query and has obtained via their query history, and determines whether this is sufficient to allow that user to operate a channel. Thus, privacy protection systems can take advantage of the inferences detected by RICE-Sy, taking into account individuals' information obtained by attackers via a database of sensors, to further protect these individuals. (A simplified detection sketch follows below.) Y1 - 2024 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-14149 ER - TY - THES A1 - Pöhls, Henrich C. T1 - Increasing the Legal Probative Value of Cryptographically Private Malleable Signatures N2 - Die Arbeit befasst sich mit der Erarbeitung von technischen Vorgaben und deren Umsetzung in kryptographisch sichere Verfahren von datenschutzfreundlichen, veränderbaren digitalen Signaturverfahren (private malleable signature schemes oder MSS) zur Erlangung möglichst hoher rechtlicher Evidenz.
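Referring back to the Lachat record above, a hedged Python sketch of the query-history idea follows; the channel definition and attribute names are invented for illustration and do not reproduce the RICE-M/RICE-Sy formalism:

# A known inference channel "fires" once a user's accumulated query
# attributes cover the channel's required inputs (assumed channel below).
CHANNELS = {"sleep_schedule": {"accelerometer", "light", "timestamp"}}

history = {}  # user -> set of sensor attributes already queried

def check_query(user, attributes):
    # Record the new query, then report which channels the user can now operate.
    seen = history.setdefault(user, set())
    seen.update(attributes)
    return [name for name, needed in CHANNELS.items() if needed <= seen]

check_query("u1", {"accelerometer", "timestamp"})   # -> [] (not enough yet)
print(check_query("u1", {"light"}))                 # -> ['sleep_schedule']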
Im Recht werden bestimmte kryptographische Algorithmen, Schlüssellängen und deren korrekte organisatorische Anwendungen zur Erzeugung elektronisch signierter Dokumente als rechtssicher eingestuft. Dies kann zu einer Beweiserleichterung mithilfe signierter Dokumente führen. So gelten nach Verordnung (EU) Nr. 910/2014 (eIDAS) qualifiziert signierte elektronische Dokumente entweder als Anscheinsbeweis der Echtheit oder ihnen wird gar eine gesetzliche Vermutung der Echtheit zuteil. Gesetzlich anerkannte technische Verfahren, die einen solch erhöhten Beweiswert erreichen, erfüllen mithilfe von Kryptographie im Wesentlichen zwei Eigenschaften: Integritätsschutz (integrity), also die Erkennung der Abwesenheit von unerwünschten Änderungen, und die Zurechenbarkeit des unveränderten Dokumentes zum Signaturersteller (accountability). Hingegen ist der größte Vorteil veränderbarer digitaler Signaturverfahren (MSS) die „privacy“ genannte Eigenschaft: Eine autorisierte Änderung verbirgt den vorherigen Inhalt. Des Weiteren bleibt die Signatur so lange valide, wie ausschließlich autorisierte Änderungen vorgenommen werden. Wird diese Eigenschaft kryptographisch nachweislich sicher erfüllt, so spricht man von einem private malleable signature scheme. In der Arbeit werden zwei verbreitete Formen, die sogenannten redactable signature schemes (RSS) und die sanitizable signature schemes (SSS), eingehend betrachtet. Diese erlauben vielfältige Einsatzmöglichkeiten, zum Beispiel eine autorisierte spätere Veränderung zur Wahrung von Geschäftsgeheimnissen oder zum Datenschutz: Der Unterzeichner delegiert so beispielsweise über ein private redactable signature scheme nur das nachträgliche Schwärzen (redaction). Dies schränkt die Veränderbarkeit auf das Entfernen von Informationen ein, erlaubt aber wirksam die Wahrung des Datenschutzes oder den Schutz von (Geschäfts)geheimnissen, indem diese Informationen irreversibel für Angreifer entfernt werden. Die kryptographische privacy-Eigenschaft besagt, dass es nun nicht mehr effizient möglich ist, aus dem geschwärzten Dokument Wissen über die geschwärzten Informationen zu erlangen, auch und gerade nicht für den Signaturprüfer. Die Arbeit geht im Kern der Frage nach, ob ein MSS sowohl die kryptographische Eigenschaft „privacy“ als auch gleichzeitig die Eigenschaften „integrity“ und „accountability“ mit ausreichend hohen Sicherheitsniveaus erfüllen kann. Das Ziel ist es, dass ein MSS gleichzeitig einen solch ausreichend hohen Grad an Sicherheit erreicht, dass (1) die autorisierten nachträglichen Änderungen zum Schutze von Geschäftsgeheimnissen oder personenbezogenen Daten eingesetzt werden können, und dass (2) dem Dokument, welches mit dem speziellen Signaturverfahren signiert wurde, ein erhöhter Beweiswert beigemessen werden kann. In Bezug auf Letzteres stellt die Arbeit sowohl die technischen Vorgaben, welche für qualifizierte elektronische Signaturen (nach Verordnung (EU) Nr. 910/2014) gelten, in Bezug auf die nachträgliche Änderbarkeit dar, als auch konkrete kryptographische Eigenschaften und Verfahren, um diese Vorgaben kryptographisch beweisbar zu erreichen. Insbesondere weisen veränderbare Signaturen (MSS) einen anderen Integritätsschutz als traditionelle digitale Signaturen auf: Eine signierte Nachricht darf nachträglich durch eine definierte dritte Partei in einer definierten Art modifiziert werden. Diese sogenannte autorisierte Änderung (authorized modification) kann auch ohne Kenntnis des geheimen Signaturschlüssels des Unterzeichners durchgeführt werden.
Bei der Verifikation der digitalen Signatur durch den Signaturprüfer bleiben der ursprüngliche Signierende und dessen Einwilligung zur autorisierten Änderung kryptographisch verifizierbar, auch wenn autorisierte Änderungen vorgenommen wurden. Die Arbeit umfasst folgende Bereiche: 1. Analyse der Rechtsvorgaben zur Ermittlung der rechtlich relevanten technischen Anforderungen hinsichtlich des geforderten Integritätsschutzes (integrity protection) und hinsichtlich des Schutzes von personenbezogenen Daten und (Geschäfts)geheimnissen (privacy protection), 2. Definition eines geeigneten Integritäts-Begriffes zur Beschreibung der Schutzfunktion von existierenden malleable signatures und bereits rechtlich anerkannten Signaturverfahren, 3. Harmonisierung und Analyse der kryptographischen Eigenschaften existierender malleable signature Verfahren in Hinblick auf die rechtlichen Anforderungen, 4. Entwicklung neuer und beweisbar sicherer kryptographischer Verfahren, 5. abschließende Bewertung des rechtlichen Beweiswertes (probative value) und des Datenschutzniveaus anhand der technischen Umsetzung der rechtlichen Anforderungen. Die Arbeit kommt zu dem Ergebnis, dass zunächst einmal jedwede (autorisierte wie auch unautorisierte) Änderung von einem kryptographisch sicheren malleable signature Verfahren (MSS) ebenfalls erkannt werden muss, um Konformität mit Verordnung (EU) Nr. 910/2014 (eIDAS) zu erlangen. Eine solche Änderungserkennung, durch die der Signaturprüfer ohne Zuhilfenahme weiterer Parteien oder Geheimnisse die Abwesenheit von autorisierten und unautorisierten Änderungen erkennt, wurde im Rahmen der Arbeit entwickelt (non-interactive public accountability (PUB)). Diese neue kryptographische Eigenschaft wurde veröffentlicht und bereits von Arbeiten Anderer aufgegriffen. Des Weiteren werden neue kryptographische Eigenschaften und redactable signature und sanitizable signature Verfahren vorgestellt, welche zusätzlich zu dieser Änderungserkennung einen starken Schutz gegen die Aufdeckung des Originals ermöglichen. Werden geeignete Eigenschaften erfüllt, so wird für bestimmte Fälle ein technisches Schutzniveau erzielt, welches mit klassischen Signaturen vergleichbar ist. Damit lässt sich die Kernfrage positiv beantworten: Private MSS können ein Integritätsschutzniveau erreichen, welches dem rechtlich anerkannter digitaler Signaturen technisch entspricht, aber dennoch nachträgliche Änderungen autorisieren kann, welche einen starken Schutz gegen die Wiederherstellung des Originals ermöglichen. N2 - This thesis distills technical requirements for an increased probative value and data protection compliance, and maps them onto cryptographic properties for which it constructs provably secure and especially private malleable signature schemes (MSS). MSS are specialised digital signature schemes that allow the signatory to authorize certain subsequent modifications, which will not negatively affect the signature verification result. Legally, regulations such as European Regulation 910/2014 (eIDAS), ‘follow-up’ to longstanding Directive 1999/93/EC, describe the requirements in technology-neutral language. eIDAS states that, when a digital signature meets the full requirements, it becomes a qualified electronic signature, and then it “[...] shall have the equivalent legal effect of a handwritten signature [...]” [Art. 25 Regulation 910/2014].
The question of what legal effect this has with regard to the probative value that is assigned is actually not determined in EU Regulation 910/2014 but in European member state law. This thesis concentrates its analysis on the German Code of Civil Procedure (ZPO), which is detailed in this respect. Under the ZPO, a signature awards the signed document at least the high probative value of prima facie evidence. For signed documents of official authority, the ZPO's statutory rules even award a legal presumption of authenticity. This increased probative value is also awarded to electronic documents bearing electronic signatures when those conform to the eIDAS requirements. The requirements centre on the technical security goals of integrity and accountability. Technical mechanisms use cryptographic means to detect the absence of unauthorized modifications (integrity) and allow the authentication of the signed document's signatory (accountability). However, the specialised malleable signature schemes' main advantage is a cryptographic property termed privacy: An authorized subsequent modification will protect the confidentiality of the modified original. Moreover, the MSS will retain a verifiable signature if only authorized modifications were carried out. If these properties are reached with provable security, the schemes are called private malleable signature schemes. This thesis analyses two forms of MSS discussed in existing literature: Redactable signature schemes (RSS), which allow subsequent deletions, and sanitizable signature schemes (SSS), which allow subsequent edits. These two forms have many application scenarios: A signatory can delegate a later redaction while retaining the integrity and authenticity protection for the still remaining parts. The verification of a signature on a redacted or sanitized document still enables the verifying entity to corroborate the signatory's identity with the help of flanking technical and organisational mechanisms, e.g. a trusted public key infrastructure. The valid signature further corroborates the absence of unauthorized changes, because the MSS is still cryptographically protecting the signed document from undetected unauthorized changes inflicted by adversaries. Due to the confidentiality protection for the overwritten parts of the document that follows from cryptographic privacy, sanitization and redaction can be used to safeguard personal data to comply with data protection regulation or to withhold trade secrets. The research question is: Can a malleable signature scheme be private, to be compliant with EU data protection regulation, and at the same time fulfil the integrity protection legally required in the EU to achieve a high probative value for the signed data? Answering this requires understanding the protection requirements with respect to accountability and integrity rooted in Regulation 910/2014 and related legal texts. This thesis has analysed the previous Directive 1999/93/EC as well as the German SigG and SigVO and UK and US laws. Besides that, legal texts, laws, and regulations on the protection requirements of personal data (or PII), e.g. the German BDSG or EU Regulation 2016/679 (GDPR), have been analysed to distill the confidentiality requirements.
Moreover, an answer to the research question entails understanding the relevant difference between regular digital signature schemes, like RSASSA-PSS from PKCS-v2.2 [422], which are legally accepted mechanisms for generating qualified electronic signatures, and MSS, for which the legal status was completely unknown before this thesis. This difference matters especially because MSS allow the authorized entity to adapt the signature, such that it is valid after the authorized modification, without the knowledge or use of the signatory's signature generation key. On verification of an MSS, the verifying entity still sees a valid signature technically appointing the legal signatory as the origin of a document, which might, however, have undergone authorized modifications after the signature was applied. The thesis documents the results achieved in several domains: 1. Analysis of legal requirements towards integrity protection for an increased probative value and towards confidentiality protection for use as a privacy-enhancing technique to comply with data protection regulation. 2. Definition of a suitable terminology for integrity protection to capture (a) the differences between classical and malleable signature schemes, (b) the subtleties among existing MSS, as well as (c) the legal requirements. 3. Harmonisation of existing MSS and their cryptographic properties and the analysis of their shortcomings with respect to the legal requirements. 4. Design of new cryptographic properties and their provably secure cryptographic instantiations, i.e., the thesis proposes nine new cryptographic constructions accompanied by rigorous proofs of their security with respect to the formally defined cryptographic properties. 5. Final evaluation of the increased probative value and data-protection level achievable through the eight proposed cryptographic malleable signature schemes. The thesis concludes that the detection of any subsequent modification (authorized and unauthorized) is of paramount legal importance in order to meet EU Regulation 910/2014. Further, this thesis formally defined a public form of the legally required integrity verification, which allows the verifying entity to corroborate the absence of any unauthorized modifications with a valid signature verification while simultaneously detecting the presence of an authorized modification, if at least one such authorized modification has subsequently occurred. This property, called non-interactive public accountability (PUB), has been formally defined in this thesis, was published, and has already been adopted by the academic community. It was carefully conceived not to negatively impact a baseline level of privacy protection, as non-interactive public accountability had to destroy an existing strong privacy notion of transparency, which was identified as a hindrance to legal equivalence arguments. With RSS and SSS constructions that meet these properties, the thesis can give a positive answer to the research question: Private MSS can reach a level of integrity protection and guarantee a level of accountability comparable to that of technical mechanisms that are legally accepted to generate qualified electronic signatures, giving an increased probative value to the signed document, while at the same time protecting the overwritten contents' confidentiality.
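To make the redaction mechanism concrete, here is a hedged Python sketch of one textbook redactable-signature construction (salted hash commitments per message block, plus an ordinary Ed25519 signature over the commitments via the third-party pyca/cryptography package); it is an illustrative baseline, not one of the schemes constructed in this thesis:

import os, hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

def commit(salt, block):
    return hashlib.sha256(salt + block).digest()

def sign(blocks, key):
    salts = [os.urandom(16) for _ in blocks]
    commitments = [commit(s, b) for s, b in zip(salts, blocks)]
    sig = key.sign(b"".join(commitments))   # the signature covers commitments only
    return salts, commitments, sig

def redact(blocks, salts, i):
    # Authorized redaction: drop block i and its salt, keep only its commitment.
    blocks, salts = list(blocks), list(salts)
    blocks[i], salts[i] = None, None
    return blocks, salts

def verify(blocks, salts, commitments, sig, pub):
    for b, s, c in zip(blocks, salts, commitments):
        if b is not None and commit(s, b) != c:
            return False                    # unauthorized modification detected
    try:
        pub.verify(sig, b"".join(commitments))
        return True
    except InvalidSignature:
        return False

key = ed25519.Ed25519PrivateKey.generate()
blocks = [b"name: Alice", b"diagnosis: X", b"date: 2024"]
salts, cms, sig = sign(blocks, key)
blocks2, salts2 = redact(blocks, salts, 1)  # remove the diagnosis
assert verify(blocks2, salts2, cms, sig, key.public_key())

Note that a verifier of this baseline sees which positions were redacted, which loosely mirrors the public change detection discussed above, while the salted commitments keep the redacted content confidential.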
KW - Integrity KW - Privacy KW - Redactable Signature Scheme (RSS) KW - Sanitizable Signature Scheme (SSS) KW - eIDAS KW - Integrität KW - Elektronische Unterschrift KW - Beweiswürdigung KW - Datenschutz KW - Vertraulichkeit Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-5823 ER - TY - JOUR A1 - Maletzky de García, Martina T1 - Bridging the state and market logics of refugee labour market inclusion – a comparative study on the inclusion activities of German professional chambers JF - Comparative Migration Studies N2 - Due to their high numbers, refugees' labour market inclusion has become an important topic for Germany in recent years. Because of a lack of research on meso-level actors' influences on labour market inclusion and the transcendent role of organizations in modern societies, the article focuses on the German professional chambers' role in the process of refugee inclusion. The study shows that professional chambers are intermediaries between economic actors, the government and refugees, which all follow their own logics and ideas of labour market inclusion (the state, the market and the community logic). The measures taken by professional chambers mainly reflect a governmental logic (to reduce refugee unemployment) combined with a market logic (to provide human resources to economic actors). A community logic (altruism) only comes into play as a rather unintended consequence of measures addressing the other two logics. The measures of two types of professional chambers are compared. Close similarities between them reveal that the organization type is of theoretical relevance for explaining the type of measures organizations opt for. Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:101:1-2021100821553693088701 ER - TY - JOUR A1 - Anagnostopoulos, Nikolaos Athanasios A1 - Teymuri, Benyamin A1 - Serati, Reza A1 - Rasti, Mehdi ED - Xie, Bin ED - Wang, Ning ED - Gu, Yi ED - Stefanidis, Angelos T1 - LP-MAB: Improving the Energy Efficiency of LoRaWAN Using a Reinforcement-Learning-Based Adaptive Configuration Algorithm JF - Sensors N2 - In the Internet of Things (IoT), Low-Power Wide-Area Networks (LPWANs) are designed to provide low energy consumption while maintaining a long communication range for End Devices (EDs). LoRa is a communication protocol that can cover a wide range with low energy consumption. To evaluate the efficiency of the LoRa Wide-Area Network (LoRaWAN), three criteria can be considered, namely, the Packet Delivery Rate (PDR), Energy Consumption (EC), and coverage area. A set of transmission parameters has to be configured to establish a communication link. These parameters can affect the data rate, noise resistance, receiver sensitivity, and EC. The Adaptive Data Rate (ADR) algorithm is a mechanism to configure the transmission parameters of EDs, aiming to improve the PDR. Therefore, we introduce a new algorithm using the Multi-Armed Bandit (MAB) technique to configure the EDs' transmission parameters in a centralized manner on the Network Server (NS) side, while also improving the EC. The performance of the proposed algorithm, the Low-Power Multi-Armed Bandit (LP-MAB), is evaluated through simulation results and is compared with other approaches in different scenarios. The simulation results indicate that LP-MAB outperforms other algorithms in EC while maintaining a relatively high PDR in various circumstances.
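A hedged sketch of the multi-armed-bandit idea named in this record follows; epsilon-greedy stands in for the authors' bandit policy, and the arm grid and reward trade-off are invented assumptions, not LP-MAB itself:

import random

# Each (spreading factor, TX power) pair is one bandit arm on the server side.
ARMS = [(sf, p) for sf in range(7, 13) for p in (2, 8, 14)]  # SF7-SF12, dBm

counts = {a: 0 for a in ARMS}
values = {a: 0.0 for a in ARMS}   # running mean reward per arm

def choose(eps=0.1):
    if random.random() < eps:
        return random.choice(ARMS)             # explore
    return max(ARMS, key=lambda a: values[a])  # exploit the best estimate

def update(arm, delivered, energy_mj):
    # Assumed reward: reward delivery, penalize energy (weights are invented).
    reward = (1.0 if delivered else 0.0) - 0.01 * energy_mj
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean

In a deployment-like loop, the server would call choose() before each scheduled uplink window and update() once the acknowledgement (or its absence) and an energy estimate are known.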
KW - Internet of Things (IoT) KW - LoRaWAN KW - adaptive configuration KW - machine learning KW - reinforcement learning Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-11853 SN - 1424-8220 VL - 23 IS - 4 PB - MDPI CY - Basel, Switzerland ER - TY - JOUR A1 - Herbold, Steffen A1 - Hautli-Janisz, Annette A1 - Heuer, Ute A1 - Kikteva, Zlata A1 - Trautsch, Alexander T1 - A large-scale comparison of human-written versus ChatGPT-generated essays JF - Scientific Reports N2 - ChatGPT and similar generative AI models have attracted hundreds of millions of users and have become part of the public discourse. Many believe that such models will disrupt society and lead to significant changes in the education system and information generation. So far, this belief is based on either colloquial evidence or benchmarks from the owners of the models; both lack scientific rigor. We systematically assess the quality of AI-generated content through a large-scale study comparing human-written versus ChatGPT-generated argumentative student essays. We use essays that were rated by a large number of human experts (teachers). We augment the analysis by considering a set of linguistic characteristics of the generated essays. Our results demonstrate that ChatGPT generates essays that are rated higher in quality than human-written essays. The writing style of the AI models exhibits linguistic characteristics that are different from those of the human-written essays. Since the technology is readily available, we believe that educators must act immediately. We must re-invent homework and develop teaching concepts that utilize these AI models in the same way as math utilizes the calculator: teach the general concepts first and then use AI tools to free up time for other learning objectives. Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-13961 VL - 13 PB - Springer Nature ER - TY - JOUR A1 - Sengl, Michael A1 - Heinke, Elfi T1 - Teaching Journalism Literacy in Schools: The Role of Media Companies as Media Educators in Germany JF - Media and Communication N2 - German journalism is facing major challenges, including declining circulation, funding, and trust, and political allegations of spreading disinformation. Increased media literacy in the population is one way to counter these issues and their implications. This especially applies to the sub-concept of journalism literacy, focusing on the ability to consume news critically and reflectively, thus enabling democratic participation. For media companies, promoting journalism literacy seems logical for economic and altruistic reasons. However, research on German initiatives is scarce. This article presents an explorative qualitative survey of experts from seven media companies offering journalistic media education projects in German schools, focusing on the initiatives' content, structure, and motivation. Results show that initiatives primarily aim at students and teachers, offering mostly education on journalism (e.g., teaching material) and via journalism (e.g., journalistic co-production with students). While these projects mainly provide information on the respective medium and journalistic practices, dealing with disinformation is also a central goal. Most initiatives are motivated both extrinsically (e.g., reaching new audiences) and intrinsically (e.g., democratic responsibility).
Despite sometimes insufficient resources and reluctant teachers, media companies see many opportunities in their initiatives: Gaining trust and creating resilience against disinformation are just two examples within the larger goal of enabling young people to be informed and opinionated members of a democratic society. KW - disinformation KW - journalism literacy KW - journalistic media education KW - media literacy KW - news media literacy Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-12499 VL - volume 11 IS - issue 2 SP - 53 EP - 63 PB - Cogitatio Press CY - Lisbon, Portugal ER - TY - JOUR A1 - Caspari-Sadeghi, Sima T1 - Learning assessment in the age of big data: Learning analytics in higher education JF - Cogent Education N2 - Data-driven decision-making and data-intensive research are becoming prevalent in many sectors of modern society, e.g., healthcare, politics, business, and entertainment. During the COVID-19 pandemic, huge amounts of educational data and new types of evidence were generated through various online platforms, digital tools, and communication applications. Meanwhile, it is acknowledged that education lacks the computational infrastructure and human capacity to fully exploit the potential of big data. This paper explores the use of Learning Analytics (LA) in higher education for measurement purposes. Four main LA functions in assessment are outlined: (a) monitoring and analysis, (b) automated feedback, (c) prediction, prevention, and intervention, and (d) new forms of assessment. The paper concludes by discussing the challenges of adopting and upscaling LA as well as the implications for instructors in higher education. KW - Big data KW - learning analytics KW - technology-enhanced assessment Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-12236 VL - 2023 IS - Volume 110, issue 1 PB - Taylor & Francis ER - TY - JOUR A1 - Becher, Stefan A1 - Gerl, Armin ED - Sarne, Giuseppe Maria Luigi ED - Ma, Jianhua ED - Rosaci, Domenico ED - Srivastava, Gautam T1 - ConTra Preference Language: Privacy Preference Unification via Privacy Interfaces JF - Sensors N2 - After the enactment of the GDPR in 2018, many companies were forced to rethink their privacy management in order to comply with the new legal framework. These changes mostly affect the Controller, which must achieve GDPR-compliant privacy policies and management. However, measures to give users a better understanding of privacy, which is essential to generate legitimate interest in the Controller, are often skipped. We recommend addressing this issue through privacy preference languages, whereby users define rules regarding their preferences for privacy handling. In the literature, preference languages only work with their corresponding privacy language, which limits their applicability. In this paper, we propose the ConTra preference language, which we envision will support users during privacy policy negotiation while meeting current technical and legal requirements. Therefore, ConTra preferences are defined, showing the language's expressiveness, extensibility, and applicability in resource-limited IoT scenarios. In addition, we introduce a generic approach which provides privacy language compatibility for unified preference matching.
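As a hedged illustration of what unified preference matching can look like, the Python sketch below checks a controller's policy against a user preference; the rule fields (purpose, retention, third-party sharing) are invented for illustration and are not the actual ConTra syntax:

# A user preference and a controller policy, both as simple dictionaries.
PREFERENCE = {"purpose": {"research"}, "max_retention_days": 90,
              "share_third_party": False}

def matches(policy, pref):
    # True if the policy satisfies every clause of the user's preference.
    return (policy["purpose"] in pref["purpose"]
            and policy["retention_days"] <= pref["max_retention_days"]
            and (pref["share_third_party"] or not policy["share_third_party"]))

policy = {"purpose": "research", "retention_days": 30, "share_third_party": False}
print(matches(policy, PREFERENCE))  # True -> disclosure can proceed

A preference-language engine would evaluate such rules automatically at disclosure time, before any data leaves the user's device.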
KW - privacy KW - preference language KW - legal factors KW - GDPR KW - usability Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-11218 SN - 1424-8220 VL - 22 IS - 14 PB - MDPI CY - Basel, Switzerland ER - TY - JOUR A1 - Voigt, Brigitte T1 - EU regulation of gene-edited plants — A reform proposal JF - Frontiers in Genome Editing N2 - This article presents a proposal on how the European Union's regulatory framework on genetically modified (GM) plants should be reformed in light of recent developments in genomic plant breeding techniques. The reform involves a three-tier system reflecting the genetic changes and resulting traits of GM plants. The article is intended to contribute to the ongoing debate over how best to regulate plant gene editing techniques in the EU. KW - gene editing KW - reform KW - EU regulation KW - genetically modified plant KW - new genomic techniques Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-11849 SN - 2673-3439 VL - 5 PB - Frontiers Media S.A. CY - Lausanne, Switzerland ER - TY - JOUR A1 - Rohlfing, Ingo A1 - Bethke, Felix T1 - How do researchers choose their goals of inference? A survey experiment on the effects of the state of research and method preferences on the choice between research goals N2 - In empirical research, scholars can choose between an exploratory causes-of-effects analysis, a confirmatory effects-of-causes approach, or a mechanism-of-effects analysis that can be either exploratory or confirmatory. Understanding the choice between the approaches is important for two reasons. First, the added value of each approach depends on how much is known about the phenomenon of interest at the time of the analysis. Second, because of the specializations of methods, there are benefits to a division of labor between researchers who have expertise in the application of a given method. In this preregistered study, we test two hypotheses that follow from these arguments. We theorize that exploratory research is chosen when little is known about a phenomenon and a confirmatory approach is taken when more knowledge is available. A complementary hypothesis is that quantitative researchers opt for confirmatory designs and qualitative researchers for exploration because of their academic socialization. We test the hypotheses with a survey experiment of more than 900 political scientists from the United States and Europe. The results indicate that the state of knowledge has a significant and sizeable effect on the choice of the approach. In contrast, the evidence about the effect of methods expertise is more ambivalent. Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-12739 IS - Research & Politics, 10(2) SP - 1 EP - 7 PB - Sage Publications ER - TY - JOUR A1 - Haupt, Harry A1 - Fritsch, Markus ED - Rykov, Vladimir V. T1 - Quantile Trend Regression and Its Application to Central England Temperature JF - Mathematics N2 - The identification and estimation of trends in hydroclimatic time series remains an important task in applied climate research. The statistical challenge arises from the inherent nonlinearity, complex dependence structure, heterogeneity and resulting non-standard distributions of the underlying time series. Quantile regressions are considered an important modeling technique for such analyses because of their rich interpretation and their broad insensitivity to extreme distributions.
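A hedged sketch of a quantile trend regression in Python follows, using statsmodels' QuantReg on synthetic monthly anomalies; the variable names and the simple linear-trend specification are illustrative assumptions, not the paper's exact model:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 600                                    # e.g. 50 years of monthly anomalies
df = pd.DataFrame({"t": np.arange(n)})
df["anomaly"] = 0.002 * df["t"] + rng.standard_normal(n)  # synthetic data

for q in (0.1, 0.5, 0.9):                  # lower tail, median, upper tail
    fit = smf.quantreg("anomaly ~ t", df).fit(q=q)
    print(q, fit.params["t"])              # estimated trend slope per quantile

Materially different slope estimates across q are precisely the kind of heterogeneous trends the application below reports.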
This paper provides an asymptotic justification of quantile trend regression in terms of unknown heterogeneity and dependence structure and the corresponding interpretation. An empirical application sheds light on the relevance of quantile regression modeling for analyzing monthly Central England temperature anomalies and illustrates their various heterogeneous trends. Our results suggest the presence of heterogeneities across the considered seasonal cycle and an increase in the relative frequency of observing unusually high temperatures. KW - temperature KW - trend modeling KW - seasonality KW - heterogeneity KW - quantile regression KW - C02 KW - C14 KW - C18 KW - C22 KW - Q54 Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-10382 SN - 2227-7390 VL - 10 IS - 3 PB - MDPI ER - TY - JOUR A1 - Caspari-Sadeghi, Sima T1 - Applying Learning Analytics in Online Environments: Measuring Learners' Engagement Unobtrusively JF - Frontiers in Education N2 - Prior to the emergence of Big Data and technologies such as Learning Analytics (LA), classroom research focused mainly on measuring learning outcomes of a small sample through tests. Research on online environments shows that learners' engagement is a critical precondition for successful learning and that a lack of engagement is associated with failure and dropout. LA helps instructors to track, measure and visualize students' online behavior and to use such digital traces to improve instruction and provide individualized support, i.e., feedback. This paper examines 1) metrics or indicators of learners' engagement as extracted and displayed by LA, 2) their relationship with academic achievement and performance, and 3) some freely available LA tools for instructors and their usability. The paper concludes by making recommendations for practice and further research by considering challenges associated with using LA in classrooms. Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-10440 SP - 1 EP - 6 ER - TY - JOUR A1 - Augsten, Pauline A1 - Glassner, Sebastian A1 - Rall, Jenni T1 - The Myth of Responsibility: Colonial Cruelties and Silence in German Political Discourse JF - Global Studies Quarterly N2 - Germany is considered a role model for dealing with past mass atrocities. In particular, the social reappraisal of the Holocaust is emblematic of this. However, when considering the genocide against the Herero and Nama in present-day Namibia, it is puzzling that an official recognition was only pronounced after almost 120 years, in May 2021. For a long time, silence surrounded this colonial cruelty in German political discourse. Although the discourse on German responsibility toward Namibia emerged after the end of World War II, it initially appeared detached from the genocide. That silence on colonial atrocities is to be considered a cruelty itself. Studies on silence have been expanding and becoming richer. Building on these works, the paper sets two goals: First, it advances the theorization of silence by producing a new typology, which is then integrated into discourse-bound identity theory. Second, it applies this theory to the analysis of the silencing and later acknowledging of the genocide against the Herero and Nama by German political elites. To this end, Bundestag debates, official documents, and statements by relevant political actors are analyzed in the period from 1980 to 2021.
The results reveal the dynamics between hegemonic and counter-hegemonic discursive formations, how these shift over a period of 40 years, and what role silence plays in them. Beyond our emphasis on the genocide against the Herero and Nama, our findings might benefit future studies, as the approach proposed in this paper can make silence a tangible research object for global studies. Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-10437 SP - 1 EP - 12 ER - TY - JOUR A1 - Mandarawi, Waseem A1 - Rottmeier, Jürgen A1 - Rezaeighale, Milad A1 - de Meer, Hermann T1 - Policy-Based Composition and Embedding of Extended Virtual Networks and SFCs for IIoT JF - Algorithms N2 - The autonomic composition of Virtual Networks (VNs) and Service Function Chains (SFCs) based on application requirements is significant for complex environments. In this paper, we use graph transformation in order to compose an Extended Virtual Network (EVN) that is based on different requirements, such as locations, low latency, redundancy, and security functions. The EVN can represent physical environment devices and virtual application and network functions. We build a generic Virtual Network Embedding (VNE) framework for transforming an Application Request (AR) into an EVN. Subsequently, we define a set of transformations that reflect preliminary topological, performance, reliability, and security policies. These transformations update the entities and demands of the VN and add SFCs that include the required Virtual Network Functions (VNFs). Additionally, we propose a greedy proactive heuristic for path-independent embedding of the composed SFCs. This heuristic is appropriate for real complex environments, such as industrial networks. Furthermore, we present an Industrial Internet of Things (IIoT) use case inspired by Industry 4.0 concepts, in which EVNs for remote asset management are deployed over three levels: manufacturing halls, edge computing, and cloud computing. We also implement the developed methods in Alevin and show exemplary mapping results from our use case. Finally, we evaluate the chain embedding heuristic using a random topology that is typical for such a use case, and show that it can improve the admission ratio and resource utilization with minimal overhead. (A simplified embedding sketch follows below.) KW - NFV KW - SFC KW - VNE KW - IIoT Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-8488 SN - 1999-4893 VL - 13 IS - 9 PB - MDPI ER - TY - JOUR A1 - Schiffbeck, Adrian T1 - The Spiritual "Freelancers": Young People, Religiosity and Community Problem Solving JF - SCIENTIA MORALITAS - International Journal of Multidisciplinary Research N2 - Earlier research showed that religion is related to participation among adolescents. It emphasized the effects of belonging (affiliation to groups and traditions) on community service among Western populations. This article takes one step further and focuses on religiosity as a potential motivation for community problem solving during adolescence and young adulthood, in the Eastern European Orthodox cultural setting. Data come from several semi-structured interviews with participants in a civic project conducted in the city of Timisoara (Romania). Findings indicated a low impact of the social religious component on engagement. The cognitive dimension of belief and the emotional bonding (prayer, ritual connection to the higher reality) function as indirect motivators, through the moral element of behavior.
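Referring back to the Mandarawi et al. record above, the following hedged Python sketch shows a greedy, path-independent chain placement; host names, CPU capacities, and VNF demands are invented, and the paper's actual heuristic is more elaborate:

def embed_chain(chain, hosts):
    # Greedily map each VNF to the first host with enough spare CPU,
    # ignoring link paths (path-independent placement).
    placement = {}
    for vnf, demand in chain:
        for host, free in hosts.items():
            if free >= demand:
                placement[vnf] = host
                hosts[host] -= demand
                break
        else:
            return None   # request rejected: no host can fit this VNF
    return placement

hosts = {"edge1": 4, "edge2": 2, "cloud": 16}   # spare CPU cores (invented)
chain = [("firewall", 2), ("ids", 2), ("analytics", 8)]
print(embed_chain(chain, hosts))
# {'firewall': 'edge1', 'ids': 'edge1', 'analytics': 'cloud'}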
Results also showed a privatization of spiritual life among young adults (the invisible religion): estrangement from doctrines and the development of an individualistic type of morality, meant to drive volunteer activities further. Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-9268 VL - 2020 IS - Volume 5, no. 1 SP - 131 EP - 152 ER - TY - JOUR A1 - Fritsch, Markus T1 - Data for modeling nitrogen dioxide concentration levels across Germany JF - Data in Brief N2 - The described secondary data provide a comprehensive basis for modeling conditional mean nitrogen dioxide (NO2) concentration levels across Germany. Besides concentration levels, metadata on monitoring sites from the German air quality monitoring network, geocoordinates, altitudes, and data on land use and road lengths for different types of roads are provided. The data are based on a grid of resolution 1 × 1 km, which is also included. The underlying raw data are open access and were retrieved from different sources. The statistical software R was used for (pre-)processing the data, and all code is provided in an online repository. The data were employed for modeling mean annual NO2 concentration levels in the paper "Agglomeration and infrastructure effects in land use regression models for air pollution - Specification, estimation, and interpretations" by Fritsch and Behm (2021). KW - Air pollution; Corine land cover; EEA air quality data; Nitrogen dioxide Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-10086 ER - TY - THES A1 - Mußotter, Marlene T1 - On nation, homeland, and democracy: Toward a novel three-factor measurement model for nationalism and patriotism BT - revisiting the nationalism-patriotism distinction N2 - The nationalism-patriotism distinction is one of the most influential distinctions in the field of political psychology. While frequently used, the distinction suffers from a number of shortcomings that have hitherto received little attention. This dissertation aims to help fill this research gap by systematically addressing these pitfalls. Notably, it does not abandon the binary distinction as such, but aims to further refine it. Thoroughly revisiting the nationalism-patriotism distinction, it synthesises the field's two predominant research traditions, i.e. the work of Kosterman and Feshbach (1989) in the U.S. and that of Blank and Schmidt (2003) in Germany, which have not previously been brought into dialogue. In so doing, and engaging with research on attachment, it calls for a more nuanced triad of attachments: nationalism, which revolves around the nation; patriotism, which refers to the homeland; and democratic patriotism, with democracy as its object of attachment. In line with this triad, it introduces a novel three-factor measurement model that has been validated in three studies in Germany. Overall, the dissertation underlines the need to approach ambiguous and complex concepts such as nationalism and patriotism in a more theoretically consistent way before operationalizing them in a rigorous manner.
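A hedged sketch of what such a three-factor measurement model looks like in code follows, using the Python package semopy; the indicator names (n1..d3) and the input file are placeholders, not the dissertation's validated instrument:

import pandas as pd
from semopy import Model

# Three latent factors, each measured by three placeholder survey items.
desc = """
nationalism =~ n1 + n2 + n3
patriotism  =~ p1 + p2 + p3
dem_patriot =~ d1 + d2 + d3
"""

df = pd.read_csv("survey_items.csv")   # hypothetical item-level responses
model = Model(desc)
model.fit(df)
print(model.inspect())                 # loadings and factor covariances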
KW - Nationalismus KW - Patriotismus KW - Demokratie KW - Nation KW - Identität Y1 - 2024 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-13781 ER - TY - THES A1 - Püllen, Dominik T1 - Holistic Security Engineering for Software-Defined Vehicles N2 - With the increasing use of digital technologies in the automotive sector, the traditional automobile is undergoing a structural transformation, requiring new technologies and enabling innovative mobility concepts. In particular, the ability to drive automatically or even fully autonomously, update control software, and remain connected to the environment allows attackers, in the absence of adequate protection, to infiltrate highly critical vehicle systems and take control. Once not only individual vehicles but entire fleets are dominated by software, cyberattacks could disrupt a significant portion of the infrastructure and expose passengers to substantial risks. This work follows a holistic approach to protecting highly automated software-defined vehicles (SDVs) from cyberattacks by designing and implementing security concepts in the main phases of a vehicle's lifecycle. We use SAE level 4 prototype vehicles to evaluate our proposed techniques. We start with a systematic security requirement analysis using the ISA-62443 standard series, demonstrating how threats can be identified in a collaborative, hierarchical process and how the resulting security risks impact the software and hardware architecture of a self-driving vehicle. We show how this analysis process results in concrete requirements whose consideration reduces the overall security risk to a tolerable level. Subsequently, we develop technical solutions for selected requirements. We begin by securing the CAN and FlexRay legacy protocols, which we foresee being used in specific areas of SDVs during a transitional period despite technological changes. To enable vehicle-wide security management, we address the management and distribution of cryptographic keys within such networks, mainly focusing on resource-constrained devices. We propose using lightweight implicit certificates for deriving cryptographic group keys that can be used in CAN networks. Additionally, we demonstrate how the slot-based frame structure of the FlexRay protocol allows for efficient "multi-slot" authentication, for which we calculate cryptographic keys using hash-based key chains (a simplified key-chain sketch follows below). SDVs use Ethernet-based communication protocols and custom middleware stacks to transmit large amounts of data in real time. We develop a three-stage security process for the novel ASOA, which enables the development and central orchestration of system-agnostic functional software components on embedded systems and HPC platforms. After the central specification of the security architecture at the data flow level, security tokens are automatically calculated and distributed for runtime protection of the service-oriented, DDS-based data transmission. Our process ensures the strict separation of function and system knowledge, allowing for cost-effective and adaptable security architecture management. The evaluation in four self-driving, software-defined vehicles demonstrates an average runtime overhead of approximately 5.71%. As the initial risk analysis and actual cyberattacks have shown, protective measures against the compromise of control units must be taken alongside communication security. To address this, we develop a method for verifying and validating the software integrity of control units.
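The hash-based key chains mentioned above can be sketched in a few lines of Python; chain length, the MAC construction, and the reverse-order disclosure are illustrative assumptions rather than the thesis's exact FlexRay protocol:

import hashlib, hmac

def make_chain(seed, n):
    # K_n = H(seed), K_{i-1} = H(K_i): keys are disclosed in reverse order of
    # construction, so a receiver verifies a key by hashing it back to a
    # publicly known anchor.
    chain = [hashlib.sha256(seed).digest()]
    for _ in range(n - 1):
        chain.append(hashlib.sha256(chain[-1]).digest())
    return chain[::-1]   # chain[0] is the public anchor, later keys follow

keys = make_chain(b"ecu-secret", 100)
anchor = keys[0]

# Slot i uses key i to authenticate its frame; the receiver checks that the
# disclosed key hashes back to the anchor in exactly i steps.
i, frame = 3, b"flexray-slot-payload"
tag = hmac.new(keys[i], frame, hashlib.sha256).digest()
check = keys[i]
for _ in range(i):
    check = hashlib.sha256(check).digest()
assert check == anchor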
A governmental third party confirms a measurement through a digital certificate, proving the examined vehicle's trustworthiness and suitability for participation in automated traffic. In the final step of this work, we present an assessment scheme that allows software-defined vehicles to evaluate security incidents during operation in terms of their maximum expected damage and to initiate appropriate countermeasures. We follow the ISO/SAE 21434 standard and model attack paths using a graph representing dependencies among internal vehicle assets to account for the propagation effects of cyberattacks. The assessment of a security incident considers not only the probability of individual attack paths but also the vehicle context. Our practical evaluation demonstrates that we can detect, report, and assess security incidents below the human reaction time in the earlier mentioned prototype vehicles. Y1 - 2024 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-14497 ER - TY - RPRT A1 - Azcuy Becquer, Claudia A1 - Heinrich, Horst-Alfred T1 - Data documentation on history visualisations on the covers of all issues of Der Spiegel between 1965 and 2021 N2 - To answer the research question, all SPIEGEL covers from 1965 to 2021 were examined for references to history topics. The report documents the assignment of the 533 recorded covers to the categories of history narrative, politics of memory, and politics of the past. Main article: https://doi.org/10.3167/jemms.2023.150107 KW - picture-type analysis KW - cover analysis KW - memory politics KW - Vergangenheitspolitik KW - SPIEGEL cover Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-12678 ER - TY - RPRT A1 - Heinrich, Horst-Alfred T1 - Test of different structural equation models describing the relationship between nationalism, patriotism, and anti-immigration attitudes: documentation of an empirical study N2 - This paper documents the results of a study dealing with the relationship between in-group and out-group attitudes. The research enquires into the question of whether nationalistic, patriotic, and anti-immigration attitudes are merely correlated with each other or whether there is a causal relationship between them. The presentation of the Mplus output together with the correlation and covariance matrices will allow readers to check the results and, if they wish, to test alternative models. KW - Nationalismus KW - Nationale Identität KW - Ausländerfeindlichkeit KW - Patriotismus KW - Strukturgleichungsmodell KW - Theorietest Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-5502 ER - TY - THES A1 - Donner, Eva Katharina T1 - 4 Essays on Sustainable Development and Organizations' Response to Stakeholder's Expectations N2 - As an integral part of society, the environment, and the economy, an organization's survival and growth depend on legitimacy, which reflects the social support and acceptance of its stakeholders, the institutions and individuals it interacts with. Organizations interact with internal and external stakeholder groups, which do not necessarily have the same legitimacy expectations. For several reasons, such as the adoption of new laws or the invention of new technologies, stakeholders' expectations of legitimate actions can change over time. Thus, responding to changing stakeholder expectations, as an integral part of a changing environment, is complex.
The main objective of this dissertation is to answer the research question: "How do organizations respond to changing stakeholder expectations in the context of sustainable development?". Building on two empirical settings with four essays, this dissertation provides insights into organizations' responses to changing stakeholder expectations with regard to sustainable development. In particular, it takes a closer look at the interplay between internal stakeholders, such as employees, and external stakeholders, such as politics or customers. The first essay summarizes the organizational factors for the implementation of a research data management (RDM) system within higher education institutions (HEIs) and how they interact with each other. Based on Leavitt's (1965) classical model of organizational change, the essay provides an overview of the interrelations between the individual components that make up an RDM system. The second essay investigates how early career researchers within HEIs use different response strategies toward the state, market, professional, and community logics in the context of RDM. It provides insights into how employees deal with changing environmental conditions and the organization's response to them. The third essay analyses the shift from voluntary to mandatory sustainability reporting and demonstrates different strategies organizations adopt in response. The fourth essay incorporates the findings of the third essay and investigates which factors influence an organization's response strategy. It focuses on the interface between the top management team and the chief executive officer. This dissertation makes at least two overall contributions to management and organization studies research. First, it emphasizes the role of different stakeholder groups and their impact on organizations' activities. Second, it shows the value of connecting different theoretical approaches, such as institutional logics, upper echelons theory, and organizational transparency research, in the context of organizational legitimacy and demonstrates that a nuanced view is necessary to understand the concept of organizational legitimacy. This cumulative dissertation is structured as follows. Part A is an introduction to the study of organizational legitimacy. Part B contains the four essays. KW - Legitimacy KW - Strategisches Management Y1 - 2024 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-14516 ER -