The publication vividly traces how the state administration and direction of mining operations took shape and expanded over time, against the background of changing territorial rulers, up to the so-called Direktionsprinzip. With the Mining Act of 1865, state influence was reduced to a minimum.
The mining law reforms of the mid-19th century in Prussia had far-reaching effects on the mining workforce. In North Rhine-Westphalia, a similar paradigm shift took place in 1994 with the transition from trade inspection to the occupational safety administration. The treatise examines and compares the effects of these legal changes.
The revision of mining law in Austria in the mid-19th century initially intended to remove metallurgy from mining law and assign it to general trade law. This met with fierce resistance from the owners of mines and smelting works. The treatise traces the course of events up to the enactment of the Mining Act.
Habsburg versus Preußen
(2016)
The purpose of this work is to take a closer look at the relationship between the two great states Austria and Prussia within 'Germany', particularly with regard to the personal animosities of the acting protagonists, and to derive from this why differing views, expectations, mentalities and prejudices, paired with the respective ambition of Habsburg and Prussia to achieve dominance among the German states or to prevent the supremacy of the other state, could not lead to the establishment of a common German empire in Central Europe. The presentation of the political events has therefore been reduced to the central occurrences, since the historical course of events has already been treated exhaustively in numerous other publications.
This foundational study is embedded in a more comprehensive work that compares Prussia and Austria with respect to the development of mining and the partly differing design of mining law.
Arme sterben früher
(2016)
In the course of a study of miners' working conditions in the 19th century, it was found that invalidity set in at ever younger ages. Mortality, too, rose over the years. This raised the question of whether early invalidity and higher morbidity were caused not only by working conditions, but also by the mere fact of belonging to a particular social stratum. Since this was difficult to verify ex post for the period in question, it was examined whether illness and morbidity might also depend on social position today.
Bilingual subject teaching is widely regarded as one of the most significant changes in the German school system. Secondary schools such as Gymnasien and Realschulen in particular use these didactic innovations to develop their school profile and to foster their students linguistically and cognitively. But this form of teaching can also offer added value at the Mittelschule, especially in the M-Zug. This work aims to investigate and further develop the didactic effectiveness of bilingual religious education scientifically. Through the practical implementation of theories from religious pedagogy and religious didactics as well as bilingual education, a prototype of bilingual religious education is to be created that is capable of decoding religion as a 'foreign language'. In the iterative implementation of bilingual religious education units, the aim is then to find out to what extent the use of a foreign language can enable the cognitive penetration and appropriation of religious knowledge. The goal, however, is not an empirical study. Rather, the present field study is intended to generate hypotheses and theories that can then be subjected to empirical testing in a next step.
The segmentation of volumetric datasets, i.e., the partitioning of the data into disjoint sub-volumes with the goal of extracting information about these regions, is a difficult problem and has been discussed in medical imaging for decades.
Due to the ever-increasing imaging capabilities, in particular in X-ray computed tomography (CT) or magnetic resonance imaging, segmentation in industrial applications also gains interest.
Especially in industrial applications the generated datasets increase in size.
Hence, most applications apply well-known techniques in a 2+1-dimensional manner, i.e., they apply image segmentation procedures to each slice separately and track the progress along the axis of the volume on which the slices are stacked.
This discards the information on preceding or subsequent slices, which are often assumed to be nearly identical. However, in the industrial context this assumption might prove wrong, since industrial parts can change their appearance significantly over the course of even a few slices.
Moreover, artifacts can further distort the content of the slices.
Therefore, three-dimensional processing of voxel volumes is to be preferred, which imposes constraints on the segmentation procedures. For example, they must not rely on global information, as it usually cannot be computed efficiently for big scans.
Yet another frequent problem is that applications focus only on individual parts, with algorithms tailored to that case. The most prominent medical segmentation procedures, for example, apply methods designed to find the liver, and only the liver, of a patient.
The implication is that the same method cannot then be applied to find other parts of the scan, so such methods have to be designed individually for each object to be segmented.
Flexible segmentation methods are needed especially when partitioning unique scans. We define a unique scan to be a voxel dataset for which no comparable volume exists.
Classical examples include the use case of cultural heritage where not only the objects themselves are unique but also scan parameters are optimized to obtain the best image quality possible for that specific scan.
This thesis aims at introducing novel methods for voxelwise classifications based on local geometric features.
The latter are computed from local environments around each voxel and extract information in a similar way as humans do, namely by observing the similarity to geometric or textural primitives.
These features serve as the foundation for learning the proposed voxelwise classifiers and for discriminating between segmented and unsegmented voxels.
On the one hand, they perform fully automated clustering of volumes for which a representative random sample is extracted first.
On the other hand, a set of segmenting classifiers can be trained from a few seed voxels, i.e., volume elements for which a domain expert marked whether they belong to the components that shall be segmented. The interactive selection offers the advantage that no completely labeled voxel volumes are necessary and hence that unique scans of objects can be segmented for which no comparable scans exist.
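To make the seed-based approach concrete, the following is a minimal sketch under stated assumptions: simple local statistics stand in for the geometric features, and a scikit-learn random forest stands in for the proposed classifiers, neither of which is spelled out in the abstract.

```python
# Hypothetical sketch: train a voxelwise classifier from expert-marked seed voxels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def local_features(volume, x, y, z, r=2):
    # Describe a voxel by simple statistics of its local environment
    # (assumes the voxel lies at least r voxels away from the border).
    env = volume[x-r:x+r+1, y-r:y+r+1, z-r:z+r+1].astype(np.float32)
    gx, gy, gz = np.gradient(env)
    return np.array([env.mean(), env.std(),
                     np.abs(gx).mean(), np.abs(gy).mean(), np.abs(gz).mean()])

def train_from_seeds(volume, seeds):
    # seeds: list of ((x, y, z), label) pairs marked by a domain expert.
    X = np.stack([local_features(volume, *pos) for pos, _ in seeds])
    y = np.array([label for _, label in seeds])
    return RandomForestClassifier(n_estimators=100).fit(X, y)
```

Classifying every voxel with such a model takes constant work per voxel, consistent with the linear-runtime claim below.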
Overall, it will be shown that all proposed segmentation methods are effectively of linear runtime with respect to the number of voxels in the volume. Thus, voxel volumes without size restrictions can be segmented in an efficient linear pass through the volume.
Finally, the segmentation performance is evaluated on selected datasets which shows that the introduced methods can achieve good results on scans from a broad variety of domains for both small and big voxel volumes.
Online social networks provide a rich source of information about millions of users worldwide. However, due to sparsity and complex structure, analyzing these networks is quite challenging and expensive. Recently, graph embedding emerged to map networked data into low-dimensional representations, i.e. vector embeddings. These representations are fed into off-the-shelf machine learning algorithms to simplify and speed up graph analytic tasks. Given the immense importance of social network analysis, in this thesis, we aim to study graph embedding for social networks in three directions.
Firstly, we focus on social networks at the microscopic level to primarily encode the structural characteristics of users' personal networks, so-called ego networks. These representations are utilized in evaluation tasks whose performance depends on relational information from direct neighbors. For example, social circle prediction and event attendance inference both need structural information from neighbors in social networks.
Secondly, we explore assessing the content of vector embeddings in terms of topological properties. We pursue this via two proposed approaches: 1) a learning-to-rank algorithm in which the model weights reveal the importance of properties at the subgraph level (ego networks), and 2) a regression model for the direct approximation of network statistical properties at the vertex level.
Thirdly, we propose extensions of graph embedding to capture sign or additional content of social networks. Users in social media often express their feelings and attitudes towards others which forms sentiment links besides social links. We design a joint objective function whose terms capture semantics of both social and sentiment links simultaneously. We also propose a multi-task learning framework for networks with attributes and labels by stacking autoencoders. The weights of the learning tasks are automatically assigned via an adaptive loss weighting layer.
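A hedged sketch of what such a joint objective could look like, assuming dot-product scores and a weighting factor alpha (illustrative names, not the thesis' exact formulation):

```python
# Toy joint objective over social links and signed sentiment links.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def joint_loss(emb, social_edges, sentiment_edges, alpha=0.5):
    # emb: dict node -> embedding vector;
    # sentiment_edges carry a sign s in {+1, -1} (like vs. dislike).
    l_social = -sum(np.log(sigmoid(emb[u] @ emb[v]))
                    for u, v in social_edges)
    l_sentiment = -sum(np.log(sigmoid(s * (emb[u] @ emb[v])))
                       for u, v, s in sentiment_edges)
    return l_social + alpha * l_sentiment  # one objective, two link semantics
```

Minimizing such a loss pushes embeddings of socially connected users together while respecting the sign of sentiment links.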
This thesis is concerned with proposals that aimed at transforming or reforming the English-speaking world so that it could continue to dominate the world in the future. In the late 19th and early 20th centuries, these ideas emerged from the discourse of Anglo-Saxonism that represented the Anglo-Saxon ‘race’ as the most developed ‘race’ in the world which could, therefore, ‘legitimately’ rule the world. In the later 20th century, an Atlantic discourse developed, which appeared to address further nations in the group of world leaders. However, it seems to rely on similar discursive elements as Anglo-Saxonism, which only includes the English-speaking world. The construction of the respective discourses is examined in late 19th/early 20th century writings by authors broadly associated with the British Empire as well as in Union Now, a 1939 book by U.S.-American Clarence K. Streit. The latter part presents the focus of this thesis. Streit developed a new concept of a world order in which a world state – the Atlantic Union – was to be established. In a first step, it should only be founded by a nucleus of the 15 ‘leading’ democracies in the world and should subsequently be expanded. In addition to the connection between Anglo-Saxonism and Atlanticism which is investigated in Streit's writings, his network and prominence are analyzed, as are the resolutions Streit's supporters introduced into the U.S. Congress and Streit's stance on imperialism.
Forschungsdokumentation
(2021)
The "Projekt zur Rettung der Flussperlmuschel in Niederbayern" (project to save the freshwater pearl mussel in Lower Bavaria) is dedicated to rearing and resettling the freshwater pearl mussel in waters in which the formerly native mussel has by now almost died out. The implementation of the project is accompanied by a scientific evaluation that identifies the relevant actors contributing to saving the freshwater pearl mussel and reveals areas of tension or willingness to compromise between them (Heinrich / Karlstetter 2021). The present research documentation comprises material supporting the project report: the interview transcripts on which a content analysis was based, the results of the content analysis, and the cover letter with which the interviewees were contacted.
Fundamental changes in business-to-business (B2B) buying behavior confront B2B supplier firms with unprecedented challenges. On the one hand, a rising share of industrial buyers demands digitalized offerings and processes from suppliers. Consequently, suppliers are urged to implement digital transformations by expanding the range of both digital offerings and processes. On the other hand, B2B buyers increasingly expect suppliers to provide individually tailored solutions to their idiosyncratic needs. Hence, suppliers are also required to implement non-digital transformations by providing offerings and processes that are customized to each customer's specific requirements.
The rise of these digital and non-digital transformations calls established knowledge into question. Thus, B2B marketing research and practice are urged to create a comprehensive understanding of digital and non-digital transformations by means of novel and empirically grounded insights and to derive actionable response strategies. In response, my dissertation addresses the overall research question of how B2B supplier firms can successfully implement both digital and non-digital transformations in three individual essays.
In Essay 1, I offer a broader perspective on both digital and non-digital transformations by investigating digital service customization (i.e., the tailoring of digital B2B services to customers’ individual needs). Through a systematic literature review and bibliometric analysis, I outline a comprehensive set of factors that favor the application of distinct digital service customization strategies. Essay 2 represents a deep dive into digital transformations of sales processes. By making use of two rich sets of qualitative interview material from supplier and buyer firms, I identify the challenges resulting for B2B salespeople from the introduction of digital sales channels into personal selling. Moreover, I uncover facilitating mechanisms that sales managers can employ to support salespeople in coping with digital sales channels. Finally, Essay 3 constitutes a deep dive into non-digital transformations. Based on qualitative interview material and survey data from matched sales manager–salesperson dyads, the essay explores how configurations of individual salespeople’s personal and procedural competencies facilitate success at selling customer solutions (i.e., highly customized, performance-oriented offerings comprising products and/or services). The essay shows that successfully selling customized offerings like solutions hinges on salespeople’s unique configurations of present and absent competencies.
In a nutshell, these essays provide three major insights into how B2B suppliers can successfully implement digital and non-digital transformations. First, they underscore that a comprehensive understanding of the origins and spillover effects of transformations is a key prerequisite to successfully implementing them. Second, they unveil that digital and non-digital transformations have an impact on multiple organizational levels. Third, they point out important resources and capabilities that help suppliers successfully implement transformations, be they digital or non-digital.
With this dissertation, I make substantial contributions to the broader literature on digital and non-digital transformations in B2B contexts. At the same time, my dissertation provides hands-on implications for managers in B2B supplier firms that are facing fundamental transformations in the marketplace—both digital and non-digital in nature.
When the first peasant settlers came to Central Asia, Russia's newly conquered territory, at the end of the 19th and the beginning of the 20th century, among them, formatively, many German Mennonites, they decisively shaped the further development of the region from then on. Nevertheless, in the shadow of the successes of the military conquest and in the shadow of the Soviet period, they have become the 'dead souls' of the history of colonization par excellence.
Viewed as a multidimensional social phenomenon, the migration to Turkestan reveals a cross-cultural goal of peasant community-building, within which culture-specific processes of interpretation and meaning-making took place. Numerous peasant projections of the life regarded as right, within the framework of the hoped-for settling-in, converged into a peasant space of imagination. The Mennonite emigration, hitherto explained in the literature one-sidedly by religious motives, is likewise a consequence of these phenomena. By no means all peasant expectations could be fulfilled, but they had a real effect on the new homeland: they certainly formed part of the background to the 1916 uprising of the indigenous population, yet at the same time they made Turkestan a 'space of the possible', a stable narrative that continued to have an effect well into the 1930s of the Soviet era.
The concept of programmable networks is radically changing the way communication infrastructures are designed, integrated, and operated. Currently, the topic is spearheaded by concepts such as software-defined networking, forwarding and control element separation, and network function virtualization. Notably, software-defined networking has attracted significant attention in telecommunication and data centers and is thus already deployed in some production-grade networks.
Despite the prevalence of software-defined networking in these domains, industrial networks have yet to see its benefits, which would encourage adoption. Misconceptions around the concept itself, the role of virtualization, and algorithms pose a significant obstacle.
Furthermore, the desire to accommodate new services in the automation industry results in constantly increasing complexity of industrial networks. This is compounded by the requirement to provide stringent deterministic service guarantees for characteristically different applications, which poses a significant challenge for management, configuration, and maintenance, as existing solutions are architecturally inflexible.
Therefore, the first contribution of this thesis addresses the misconceptions around software-defined networking by providing a comparative analysis of programmable network concepts, detailing how software-defined networking compares with other concepts and how its principles can be leveraged to evolve industrial networks.
Armed with the fundamental principles of programmable networks, the second contribution identifies virtualization technologies and proposes novel algorithms to provide varied quality of service guarantees on converged time-sensitive Ethernet networks using software-defined networking concepts.
Finally, a performance analysis of a software-defined hybrid deployment solution for control and management of time-sensitive Ethernet networks that integrates proposed novel algorithms is presented as an industrial use-case that enables industrial operators to harness the full potential of time-sensitive networks.
IoT is defined as a paradigm where "things" have sensing, actuating, communicating, and self-configuring abilities, and are connected to each other and to the Internet. Recent advancements in the manufacturing industry have helped to produce embedded devices with various sensors and actuators in mass numbers at a reduced cost. As part of the IoT revolution, everyday devices such as televisions, refrigerators, and cars, and even industrial machines, are now connected IoT devices. Recent studies have predicted that by 2025 there will be over 75 billion such IoT devices connected to the Internet.
The providers of IoT-based services want to integrate their services to satisfy customer requirements. For example, in the mobility scenario, different mobility solution providers want to jointly offer a multi-modal ticket to their customers. In such a distributed and loosely coupled environment, each owner and stakeholder wants to secure his/her own integrity, confidentiality, and functionality goals. This means that distributed rules and conditions defined by the individual owners must be enforced on the participating entities (e.g., customers or partners using their services). The owners and stakeholders may not necessarily trust each other's actions. Therefore, a mechanism is required that guarantees the rules and conditions specified by the different owners.
Attacks on IoT devices and similar computing systems are increasing and getting more advanced. IoT devices are often constrained, i.e., they have limited processing power, memory, and energy. Security mechanisms designed for traditional computing systems, e.g., computers, servers, or mobile computing devices such as smartphones, may not fit in those constrained IoT devices. Weak security mechanisms and unenforced security measures were one of the main reasons for recent successful attacks on IoT devices and services. As IoT is now used in many sensitive places, including critical infrastructures, securing them becomes more critical than ever. This thesis focuses on developing mechanisms that secure IoT devices and services and enforcing the rules and conditions specified by the owners on entities that want to access owners' resources.
In classical computer systems, security automata are used for specifying security policies and monitoring mechanisms are used for enforcing such policies. For instance, a reference monitor observes and stops the execution when the security policies are about to be violated, thus, the security policies are enforced. To restrict the adversary from using protected IoT devices or services for malicious purposes, it is required to ensure that a workflow must be followed to access the protected resource. In distributed IoT systems where the policies are governed by different owners, each owner would like to specify their rules and conditions in their workflows. The workflows contain tasks that must be performed in a particular order. The goal of this thesis is to develop mechanisms to specify and enforce these workflows in the distributed IoT environment.
This thesis introduces a distributed WFAC framework that restricts the entities to do only what they are allowed to do in a collaborative environment. To gain access to a service protected by the WFAC framework, every workflow participant must prove that he/she is in a particular state of an authorized workflow. Authorized means two things: (a) the owner has authorized the workflow to be executed; (b) the workflow participant is authorized to execute it. This restricts the adversary's access to the devices and its services. The security policies defined by different owners are modeled as workflows and specified using Petri Nets. The policies are then enforced with the help of the WFAC framework which supports error-handling, accountability, integration of practitioner-friendly tools, and interoperability with existing security mechanisms such as OAuth. Thus, the WFAC guarantees the integrity of workflows in a distributed environment.
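The core enforcement idea, that a task may only execute when the workflow's Petri net allows it, can be illustrated with a minimal sketch; the class and names below are illustrative, not the WFAC framework's actual API:

```python
# Toy Petri net: a transition (task) may fire only if every input place
# holds a token; firing consumes input tokens and produces output tokens.
class PetriNet:
    def __init__(self, marking, transitions):
        self.marking = dict(marking)        # place -> token count
        self.transitions = transitions      # task -> (input places, output places)

    def can_fire(self, task):
        inputs, _ = self.transitions[task]
        return all(self.marking.get(p, 0) > 0 for p in inputs)

    def fire(self, task):
        if not self.can_fire(task):
            raise PermissionError(f"task '{task}' not allowed in current state")
        inputs, outputs = self.transitions[task]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

# A participant may only reach 'book' after 'authorize' has fired.
wf = PetriNet({"start": 1},
              {"authorize": (["start"], ["authorized"]),
               "book": (["authorized"], ["done"])})
wf.fire("authorize")
wf.fire("book")  # succeeds; calling it first would raise PermissionError
```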
Hazardous materials (hazmat) have become important goods for satisfying industrial and customer demand in our modern society. The transportation of these materials is always associated with safety, security and environmental concerns due to the dangerous nature of the cargo. To improve the safety of the transportation process, hazmat transportation problems have become a popular research topic in the field of operations research. This thesis contributes to the ongoing research on the hazmat transportation problem. It provides an extensive overview of the existing literature on the hazardous materials transportation problem and offers a new classification extending the existing ones. With particular focus on the hazardous materials vehicle routing problem (HMVRP), this thesis compares different risk models and analyses their influence on the problem outcomes. Additionally, heuristic and meta-heuristic solution procedures are proposed for handling the NP-hard nature of the problem.
For this purpose, four different studies are conducted. Study 1 presents a state-of-the-art literature review including over 300 contributions to the hazmat transportation problem. The historical development of the research field is analyzed and the most important journals are identified. A detailed classification focusing on hazmat transportation on public roads is provided. Furthermore, the study identifies research gaps and presents new research opportunities. Studies 2 and 3 investigate the effects of path generation in a realistic urban network on the outcomes of the HMVRP. Additionally, different risk models for the HMVRP are compared and their influence on the problem solutions is analyzed. Study 2 proposes a simple but effective heuristic algorithm to solve the HMVRP with load-independent risk models. Study 3 extends the focus and includes load-dependent risk models. The influence of six different risk models on the solution outcomes of the HMVRP is compared and the tradeoff between risk minimization and the minimization of traveled distance is investigated. For this purpose, more than 1,700 problem instances are solved to optimality using CPLEX. In Study 4 a hybrid genetic algorithm (HGA) for solving the HMVRP with a load-dependent risk model is proposed. The HGA aims to find Pareto-optimal solutions for the bi-objective HMVRP when risk and travel distance are addressed simultaneously. The structure of the HGA is explained and experimental findings are presented.
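To illustrate what load dependence means for route risk, here is a hedged sketch of one common modeling choice (arc risk proportional to accident probability, exposed population, and the load still on board); the thesis compares six such models, which are not reproduced here:

```python
# Illustrative load-dependent risk of a single hazmat route.
def route_risk(arcs, deliveries, initial_load):
    # arcs: per-arc (population_exposure, accident_probability);
    # deliveries: load dropped off after traversing each arc.
    load, risk = initial_load, 0.0
    for (exposure, p_accident), drop in zip(arcs, deliveries):
        risk += p_accident * exposure * load  # consequence grows with load
        load -= drop
    return risk

# Two arcs: full truck on the first, half-empty on the second.
print(route_risk([(1000, 1e-6), (500, 2e-6)], [4, 4], initial_load=8))
# 8 * 1e-6 * 1000 + 4 * 2e-6 * 500 = 0.012
```

Under such a model, visiting high-risk arcs late in the tour (with less load on board) lowers total risk, which is exactly the tradeoff against traveled distance studied above.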
In conclusion, this thesis contributes to an improved understanding of the general development of the research field of hazmat logistics and of the influence of different risk models on the solution outcomes of the HMVRP. Additionally, heuristic solution methods are proposed and tested for finding compromise solutions when the bi-objective case of risk and distance minimization is addressed. Furthermore, this thesis helps new researchers access the field of hazmat logistics, as it provides a structured overview of the research field while pointing out research gaps. To address some of the identified research gaps, the thesis provides an extensive analysis of the risk modelling approaches. Thereby, it provides new insights into the basic research on risk modelling for the HMVRP. Finally, to overcome the long computation times of large problem instances, heuristic solution approaches are proposed.
Although Pompey held the highest office in Rome three times as consul, ever since Mommsen he has borne the image of a highly successful general and organizer but a moderately gifted, indeed incapable, politician. The political conduct of this power-holder has accordingly received little attention so far. This study examines in detail the political processes of the years 54 to 49 BC, a period in which Pompey, in his third consulate, which he held without a colleague, enjoyed the greatest scope for action and policy-making. It asks about his political goals, their implementation and communication, and the reactions of Rome's ruling class to his measures. His policy in the year 52 reveals a clear political concept: the coordinated and interlocking measures aimed at stabilizing the existing political system and strengthening the power of the leading men of the Senate, the heirs of the elite installed by Sulla. In doing so he avoided the problems of Sulla's reforms by keeping the collateral damage of his measures low and by not insisting on institutionally anchoring a preeminent position of power for himself. He intended instead to establish such a position through legitimate relations of service and reciprocity with the Senate and the people.
In this he relied on his own undeniably outstanding services to the res publica, which by custom obliged Senate and people to render services in return and thus also to recognize a corresponding position of power. The examination of rivalries shows that Pompey moreover aimed to monopolize future services to Rome in his own person by mastering internal crises. His five-year extraordinary imperium guaranteed him access to the necessary means of power: Pompey intended, by continuously rendering services, to be recognized as the patron of Rome. By acting through loyal magistrates, skillfully exploiting monarchical tendencies within the ruling class, using Caesar as a strong counterweight, and finally through successful crisis management, he repeatedly managed to break the determined resistance of leading senators who opposed any development towards sole rule, so that by the spring of 50 he had come very close to the position of power he sought. How unstable this position was became apparent when Pompey had to stay away from the politics of the city of Rome for a few months. In the power vacuum that then arose, the constellations changed. Pompey thereby entered a prolonged phase of political weakness from which, after his return, he was unable to free himself before the end of the period under investigation.
The reasons for out-migration from rural areas are as varied as its consequences. In order to take appropriate countermeasures and respond adequately to residents' needs, a shared discourse on the future of rural areas is necessary. Through corresponding citizen participation and the active shaping of living conditions, quality of life can be increased and living there can remain possible.
So that the wishes and needs of the citizens of the ILE national park municipalities can be addressed optimally and in good time, five of the six municipalities have taken part in a research project on inner development (Innenentwicklung). The municipalities of Bayerisch-Eisenstein, Frauenau, Neuschönau, Spiegelau and St. Oswald-Riedlhütte received support from the Chair of Regional Geography at the University of Passau and the Amt für ländliche Entwicklung Niederbayern. The goal of the project is to use extensive citizen participation to derive recommendations for action on inner development in the areas of local supply, housing, public space, social participation and climate protection, and thus to make the municipalities sustainable and age-appropriate.
§ 626 (1) BGB requires two conditions for extraordinary termination: the suitability of the facts as good cause (wichtiger Grund) and a comprehensive balancing of interests in the individual case. A breach of an ancillary duty in the employment relationship is, depending on its significance and gravity, to be regarded as good cause under § 626 (1) BGB. A property offense is a breach of an ancillary duty. Thus a property offense as such, depending on its significance and gravity in the individual case, can also be regarded as good cause for extraordinary termination. The balancing of interests under § 626 (1) BGB determines the degree of lost trust from which a summary dismissal is justified. The amount of the damage is taken into account when assessing the loss of trust justifying summary dismissal. The analysis of the case law of the Federal Labor Court (BAG) has shown that only some decisions after the 'Emmely' case take the length of service into account as a criterion in the balancing of interests under § 626 (1) BGB. In most cases, however, the BAG has departed from its previous case law: summary dismissal remained effective in cases of property offenses without the interests being weighed appropriately against each other and without milder means being considered. For these reasons, the BAG should revise its position so as to select the relevant interests deliberately and weigh them against each other.
The aim of this work is to investigate the narratives and discursive strategies of politically right-wing actors and to assess them from a pedagogical perspective. The investigation focuses on communication within the social web. Discourse analysis is used as the method for this research project. The analysis concludes that both the narratives produced and the discursive strategies employed show strong similarities across all discursive arenas examined, regardless of whether an arena exhibits primarily right-wing extremist, right-wing radical or right-wing populist tendencies. This suggests that a large proportion of politically right-wing actors spread constantly recurring narratives and underpin them with the same discursive strategies; a clear discursive boundary between right-wing extremism, right-wing radicalism and right-wing populism does not appear to exist. This creates the danger that right-wing extremist ideas can increasingly penetrate the mainstream of society.
The recording of Internet activity, combined with the linkage of personal data, has become a key resource for many paid and free services on the Web. These services are, on the one hand, web applications such as the maps/navigation or web search provided by Google, which are used free of charge every day. On the other hand, they are all the websites that provide news or general information on various topics, usually free of charge. By visiting and using these web services, all information processed within the web service is passed on to the service provider. This includes not only the profile data stored in the user account of the web service, such as name or address, but also the activity within the web service, such as clicking links or the time spent on a page.
Beyond this, however, there are countless third parties, mostly embedded invisibly in web services, that record and analyze user behavior across the entire web activity, spanning multiple websites. Various techniques, usually hidden from the user, serve to track users' online behavior closely and to collect large amounts of sensitive data. This practice is known as web tracking and is mainly used by advertising companies. The collected data are often personal and a valuable resource for companies, for example to serve personalized advertising matched to the user's profile. The use of these personal data, however, also has more far-reaching consequences, reflected among other things in price adjustments for users with particular profile attributes, such as the use of expensive devices.
The goal of this work is to increase users' privacy on the Internet and to reduce web tracking significantly. Four challenges arise, each forming a research focus of this work: (1) a systematic analysis and classification of the tracking techniques in use, (2) an examination of existing protection mechanisms and their weaknesses, (3) the design of a reference architecture for protection against web tracking, and (4) the design of an automated test environment under real-world conditions to assess how much the developed protection measures reduce web tracking. Each of these research foci contributes new results toward the overarching goal: the development of protection measures against the disclosure of sensitive user data on the Internet.
The first scientific contribution of this dissertation is a comprehensive evaluation of the web tracking techniques and methods in use, as well as their dangers, risks and implications for the privacy of Internet users. The evaluation additionally covers existing tracking protection mechanisms and their weaknesses. The insights gained are decisive for the new approaches developed in this work and improve on the previously insufficient protection against web tracking.
The second scientific contribution is the development of a robust classification of web tracking, the design of an efficient architecture for long-term studies of web tracking, and an interactive visualization of the occurrence of web tracking on the Internet. The new classification approach for identifying tracking is based on measuring the entropy of the information content of cookies. The results of the long-term web tracking studies include 1,209 tracking domains identified on the most visited websites in Germany. Within the top 25 websites, an average of 45 tracking elements per website was found. The tracker with the highest potential for building a user profile was doubleclick.com, as it monitors 90% of the websites. The evaluation of the examined tracking network further yielded a detailed insight into the tracking technique based on redirect links. We analyzed 1.2 million HTTP traces from months-long crawls of the 50,000 internationally most visited websites. The results show that 11.6% of these websites use HTTP redirects, hidden in website links, for tracking.
These redirects are used to route the user's browsing path after a click through a chain of (tracking) servers, which are usually not visible, before the intended link target is loaded. In this scenario the tracker captures valuable connection metadata about the content, topic or user interests of the website. We provide the visualization of the tracking ecosystem as an interactive open-source web tool.
The third scientific contribution of this dissertation is the design of two novel protection mechanisms against web tracking and the construction of an automated simulation environment under real-world conditions to verify the effectiveness of their implementations. The focus is on the two most widely used tracking techniques: cookies (where a unique ID is stored on the user's device) and browser fingerprinting. The latter describes a method of collecting a multitude of device properties in order to uniquely (re-)identify the user without storing a unique ID on the device. To examine the effectiveness of the protection mechanisms developed in this work, we implemented and evaluated the protection concepts directly in the Chromium browser. The result shows a successful reduction of web tracking by 44%. In addition, the 'site isolation' concept developed in this work improves the privacy of the private browsing mode, allows a manual storage time limit to be set for cookies, and protects the browser against various threats such as CSRF (Cross-Site Request Forgery) or CORS (Cross-Origin Resource Sharing) abuse. Site isolation stores the local website state in separate containers and can thereby prevent various tracking methods such as cookies, localStorage or redirect tracking. In an evaluation of 1.6 million websites, we showed that the tracker doubleclick.com has the highest potential to track users and is present on 25% of the 40,000 internationally most visited websites. Finally, we demonstrate robust protection against browser fingerprinting in our extended Chromium browser. Testing our prototype across 70,000 browser sessions shows that it protects users against browser fingerprinting tracking. Compared with five other fingerprinting protection tools, our prototype achieved the best results and is the first protection mechanism against Flash and canvas fingerprinting.
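The entropy-based classification idea can be sketched in a few lines: tracking cookies tend to carry high-entropy identifiers, while functional cookies store low-entropy values. Threshold and scoring below are illustrative assumptions, not the dissertation's calibrated classifier:

```python
# Flag cookies whose values look like high-entropy unique identifiers.
import math
from collections import Counter

def shannon_entropy(value: str) -> float:
    counts = Counter(value)
    n = len(value)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_like_tracking(cookie_value: str, threshold_bits: float = 3.5) -> bool:
    return shannon_entropy(cookie_value) >= threshold_bits

print(looks_like_tracking("de-DE"))                    # False: low entropy
print(looks_like_tracking("a9f3K2qL8zR1xWv7TbY4mN0"))  # True: ID-like value
```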
With the frequency and impact of data breaches rising, it has become essential for organizations to automate intrusion detection via machine learning solutions. This generally comes with numerous challenges, among others high class imbalance, changing target concepts and difficulties in conducting sound evaluation. In this thesis, we adopt a user-centered anomaly detection perspective to address selected challenges of intrusion detection, through a real-world use case in the identity and access management (IAM) domain. In addition to the previous challenges, salient properties of this particular problem are the high relevance of categorical data, limited feature availability and the total absence of ground truth.
First, we ask how to apply anomaly detection to IAM audit logs containing a restricted set of mixed (i.e. numeric and categorical) attributes. Then, we inquire how anomalous user behavior can be separated from normality, and this separation evaluated without ground truth. Finally, we examine how the lack of audit data can be alleviated in two complementary settings. On the one hand, we ask how to cope with users without relevant activity history ("cold start" problem). On the other hand, we seek how to extend audit data collection with heterogeneous attributes (i.e. categorical, graph and text) to improve insider threat detection.
After aggregating IAM audit data into sessions, we introduce and compare general anomaly detection methods for mixed data to a user identification approach, designed to learn the distinction between normal and malicious user behavior. We find that user identification outperforms general anomaly detection and is effective against masquerades. An additional clustering step makes it possible to reduce false positives among similar users. However, user identification is not effective against insider threats. Furthermore, the results suggest that the current scope of our audit data collection should be extended.
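As a rough illustration of user identification for masquerade detection, a classifier learns to predict the acting user from session features; sessions whose claimed user disagrees with the prediction are flagged. Feature layout and model choice below are assumptions for the sketch, not the thesis' audit schema:

```python
# Flag sessions where the predicted identity differs from the claimed one.
from sklearn.ensemble import GradientBoostingClassifier

def fit_user_model(session_features, user_ids):
    # session_features: one row of (mixed, encoded) features per session.
    return GradientBoostingClassifier().fit(session_features, user_ids)

def flag_masquerades(model, session_features, claimed_users):
    predicted = model.predict(session_features)
    return [claimed != pred for claimed, pred in zip(claimed_users, predicted)]
```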
In order to tackle the "cold start" problem, we adopt a zero-shot learning approach. Focusing on the CERT insider threat use case, we extend an intrusion detection system by integrating user relations to organizational entities (like assignments to projects or teams) in order to better estimate user behavior and improve intrusion detection performance. Results show that this approach is effective in two realistic scenarios.
Finally, to support additional sources of audit data for insider threat detection, we propose a method representing audit events as graph edges with heterogeneous attributes. By performing detection at fine-grained level, this approach advantageously improves anomaly traceability while reducing the need for aggregation and feature engineering. Our results show that this method is effective to find intrusions in authentication and email logs.
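A minimal sketch of the edge-level representation, assuming networkx and illustrative attribute names (the toy score merely flags off-hours activity, standing in for the learned per-edge model):

```python
# Audit events as attributed edges; detection scores individual edges,
# which keeps anomalies traceable to single events.
import networkx as nx

G = nx.MultiDiGraph()
G.add_edge("alice", "mail-server",
           action="send_email", topic="contracts", hour=14)
G.add_edge("alice", "vpn-gateway",
           action="authenticate", result="failure", hour=3)

def anomaly_score(u, v, attrs):
    # Toy stand-in for a learned model: off-hours activity scores higher.
    return 1.0 if attrs.get("hour", 12) < 6 or attrs.get("hour", 12) > 22 else 0.0

for u, v, attrs in G.edges(data=True):
    print(u, "->", v, attrs["action"], "score:", anomaly_score(u, v, attrs))
```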
Overall, our work suggests that masquerades and insider threats call for different detection methods. For masquerades, user identification is a promising approach. To find malicious insiders, graph features representing user context and relations to other entities can be informative. This opens the door for tighter coupling of intrusion detection with user identities, roles and privileges used in IAM solutions.
Newly arising phenomena in the occupational realm strongly shape contemporary work settings. These developments heavily affect how individuals work within and beyond organizational boundaries. Two phenomena associated with the changing nature of work have been especially prevalent in work settings and intensively discussed in public debates. First, organizations have started to introduce mindfulness practices to their workforce. Rooted in spirituality and formerly used in clinical therapy, mindfulness is applied as a human resource development practice to train employees and managers to cope with increased work intensification. Second, digitization and the importance of individualization have opened up the path for work settings beyond organizational boundaries on crowdworking online platforms. On these online platforms, workers process tasks independently and remotely. Research has just started to address the implications and meaning of mindfulness practices in organizations and the rise of crowdworking platforms. Several questions remain unanswered. This dissertation addresses unanswered but pressing questions related to these two phenomena shaping contemporary work settings. The dissertation is structured in four essays; the first two address the application and meaning of mindfulness practices. The first essay analyzes the meaning and interpretations of these new practices within organizations. The second essay takes contextual factors of the organizational environment into account and investigates their relevance for the successful implementation of mindfulness practices. The last two essays are dedicated to work attitudes and behavior on crowdworking online platforms. Essay three captures individuals' motivation for working on such platforms and its effects on workers' performance. The last essay deals with the role of professional crowdworking online communities in the work experience and assesses the effects of social support in these communities on occupational identification, work meaningfulness and, finally, work engagement. Each essay in this dissertation generates new insights into arising phenomena in contemporary work settings. Together they address several timely yet unanswered research questions and thereby offer a deeper and more nuanced understanding of the role mindfulness practices and crowdworking online platforms play in the context of the future of work.
Comparative sports pedagogy, a subdiscipline of, and at the same time an intersection between, comparative education and general sports pedagogy, takes on the task of comparing sport-related characteristics of two or more countries or cultures and of making the resulting knowledge usable in the most diverse spheres. It is a scientific discipline as well as a research method whose possibilities can be exploited not only for physical education itself but also for the (university) education of the future arrangers of physical education. Within the framework of a melioristically motivated comparison, this research project aims to identify differences and similarities in university physical education teacher training in Germany and the USA, to analyze their causes, and on this basis to point out potentials, development perspectives and options for action for the positive advancement of both university training systems. Both extra-systemic and intra-systemic aspects of investigation are taken into account.
Primary school teachers' subjective theories about parents were explored qualitatively and empirically in order to contribute to cooperation between school and home. Luhmann's systems theory and Fend's theory of the school form the theoretical basis. The relationship between the two sides was investigated by means of three fields of interaction that mark decisive spaces of encounter between primary school teachers and parents (Luhmann, 2014; Fend, 2008). The participants' subjective theories were captured using a semi-standardized guided interview and an adapted structure-laying technique (Scheele & Groeben, 1988). From the teachers' perspective, individual and general subjective theories were worked out. The results reveal factually erroneous beliefs, some of them highly inhibiting for cooperation, even though the desire for cooperative interaction was consistently emphasized.
This study investigates the use of ekphrasis in audiovisual texts. The term, which originates in ancient rhetoric, denotes the literary description of works of visual art and is transferred to film in this investigation. The aim of analyzing filmic descriptions of art, especially painting, is to determine how artworks are semantically charged, or functionalized, in the service of conveying meaning. The central objects of research are thus the media of image and film, and the entire study is embedded in the context of the intermediality discourse. The corpus consists of a conglomerate of texts that, alongside central key works, comprises in particular more recent films, made between 2011 and 2016, that have received little scholarly attention so far.
The current movement towards a smart grid serves as a solution to present power grid challenges by introducing numerous monitoring and communication technologies. A dependable yet timely exchange of data is, on the one hand, an existential prerequisite for enabling Advanced Metering Infrastructure (AMI) services, yet on the other a challenging endeavor, because the increasing complexity of the grid, fostered by the combination of Information and Communications Technology (ICT) and utility networks, inherently leads to dependability challenges.
To counter this dependability degradation, current approaches based on high-reliability hardware or physical redundancy are no longer feasible, as they lead to increased hardware costs or maintenance, if not both. The flexibility of these approaches regarding vendor and regulatory interoperability is also limited. At the same time, a suitable solution to the AMI dependability challenges is required to maintain certain regulatory-set performance and Quality of Service (QoS) levels.
While the introduction of ICT into the power grid is part of the challenge, it also serves as part of the solution. In this thesis a Network Functions Virtualization (NFV) based approach is proposed, which employs virtualized ICT components serving as a replacement for physical devices. By using virtualization techniques, it is possible to enhance performability in contrast to hardware-based solutions through virtual replacements of processes that would otherwise require dedicated hardware. This approach offers higher flexibility than hardware redundancy, as a broad variety of virtual components can be spawned, adapted and replaced in a short time. Also, as no additional hardware is necessary, the incurred costs decrease significantly. In addition, most of the virtualized components are deployed on Commercial-Off-The-Shelf (COTS) hardware, further increasing the monetary benefit.
The approach is developed by first reviewing currently suggested solutions for AMIs and related services. Using this information, virtualization technologies are investigated for their performance influences, before a virtualized service infrastructure is devised, which replaces selected components by virtualized counterparts. Next, a novel model, which allows the separation of services and hosting substrates is developed, allowing the introduction of virtualization technologies to abstract from the underlying architecture. Third, the performability as well as monetary savings are investigated by evaluating the developed approach in several scenarios using analytical and simulative model analysis as well as proof-of-concept approaches. Last, the practical applicability and possible regulatory challenges of the approach are identified and discussed.
Results confirm that—under certain assumptions—the developed virtualized AMI is superior to the currently suggested architecture. The availability of services can be severely increased and network delays can be minimized through centralized hosting. The availability can be increased from 96.82% to 98.66% in the given scenarios, while decreasing the costs by over 60% in comparison to the currently suggested AMI architecture. Lastly, the performability analysis of a virtualized service prototype employing performance analysis and a Musa-Okumoto approach reveals that the AMI requirements are fulfilled.
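The direction of the availability gain can be illustrated with elementary redundancy arithmetic; the numbers below are a hedged toy calculation assuming n independent replicas, not the thesis' scenario models, which account for far more than independent failures:

```python
# Availability of a service replicated across n independent virtual instances:
# the service is unavailable only if every replica fails at once.
def availability(a_single: float, replicas: int) -> float:
    return 1.0 - (1.0 - a_single) ** replicas

print(availability(0.9682, 1))  # 0.9682 -> single instance
print(availability(0.9682, 2))  # ~0.9990 -> two virtual replicas
```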
Computer vision aims at developing algorithms to extract high-level information from images and videos. In the industry, for instance, such algorithms are applied to guide manufacturing robots, to visually monitor plants, or to assist human operators in recognizing specific components. Recent progress in computer vision has been dominated by deep artificial neural networks, i.e., machine learning methods simulating the way that information flows in our biological brains, and the way that our neural networks adapt and learn from experience. For these methods to learn how to accurately perform complex visual tasks, large amounts of annotated images are needed. Collecting and labeling such domain-relevant training datasets is, however, a tedious—sometimes impossible—task. Therefore, it has become common practice to leverage pre-available three-dimensional (3D) models instead, to generate synthetic images for the recognition algorithms to be trained on. However, methods optimized over synthetic data usually suffer a significant performance drop when applied to real target images. This is due to the realism gap, i.e., the discrepancies between synthetic and real images (in terms of noise, clutter, etc.). In my work, three main directions were explored to bridge this gap.
First, an innovative end-to-end framework is proposed to render realistic depth images from 3D models, as a growing number of solutions (especially in the industry) are utilizing low-cost depth cameras (e.g., Microsoft Kinect and Intel RealSense) for recognition tasks. Based on a thorough study of these devices and the different types of noise impairing them, the proposed framework simulates their inner mechanisms, comprehensively modeling vital factors such as sensor noise, material reflectance, surface geometry, etc. Able to simulate a wide panel of depth sensors and to quickly generate large datasets, this framework is used to train algorithms for various recognition tasks, consistently and significantly enhancing their performance compared to other state-of-the-art simulation tools.
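A minimal sketch of the kind of sensor simulation this involves, under stated assumptions: a quadratic depth-dependent axial noise term (a common approximation for Kinect-like sensors, with illustrative coefficients) plus random dropouts; the framework itself models many more factors (material reflectance, surface geometry, etc.):

```python
# Corrupt a clean synthetic depth map with simple sensor-like noise.
import numpy as np

def add_depth_noise(depth_m, missing_rate=0.02, rng=np.random.default_rng(0)):
    # Axial noise grows with distance (coefficients are illustrative).
    sigma = 0.0012 + 0.0019 * (depth_m - 0.4) ** 2
    noisy = depth_m + rng.normal(0.0, sigma)
    # Random invalid pixels, as produced e.g. at depth discontinuities.
    noisy[rng.random(depth_m.shape) < missing_rate] = 0.0
    return noisy

clean = np.full((480, 640), 2.0)  # flat wall at 2 m
noisy = add_depth_noise(clean)
```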
In some cases, however, relevant 2D or 3D object representations to generate synthetic samples are not available. Considering this different case of data scarcity, a solution is then proposed to incrementally build a representation of visual scenes from partial observations. Provided observations are localized from one to another based on their content and registered in a global memory with spatial properties. Simultaneously, this memory can be queried to render novel views of the scene. Furthermore, unobserved regions can be hallucinated in memory, consistent with previous observations, hallucinations, and global priors. The efficacy of the proposed mnemonic and generative system, trainable end-to-end, is demonstrated on various 2D and 3D use-cases.
Finally, an advanced convolutional neural network pipeline is introduced, tackling the realism gap from a novel angle. While most methods addressing this problem focus on bringing synthetic samples—or the knowledge acquired from them—closer to the real target domain, the proposed solution performs the opposite process, mapping unseen target images into controlled synthetic domains. The pre-processed samples can then be handed to downstream recognition methods, themselves purely trained on similar synthetic data, to greatly improve their accuracy.
For each approach, a variety of qualitative and quantitative studies are detailed, providing successful comparisons to state-of-the-art methods. By proposing solutions to bridge the realism gap from either side, as well as a pipeline to improve the acquisition and generation of new visual content, this thesis provides a unique perspective on the challenges of data scarcity when building robust recognition systems.
A plethora of resources made available via retrieval systems in digital libraries remains untapped in the so-called long tail of the Web. These long-tail websites receive considerably fewer visits than major Web hubs.
Zero-effort queries ease the discovery of long-tail resources by proactively retrieving and presenting information based on a user's context. However, zero-effort queries over existing digital library structures are challenging, since the underlying retrieval system is accessible only via an API: the information need must be expressed as a query instead of directly optimizing the ranking between context and resources in the retrieval system. We address three research questions that arise from replacing the user's information-seeking process with zero-effort queries.
Our first question addresses the transformation of a user query into an automatic query derived from the context. We present means to 1) identify the relevant context at different levels of granularity, 2) derive an information need from the context via keyword extraction and personalization, and 3) express this information need in a query scheme that avoids over- or under-specified queries. We address the cold-start problem with an approach that bootstraps user profiles from social media, even for passive users.
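A minimal sketch of the query-derivation step, assuming a plain term-frequency keyword extractor and a small stopword list; the actual approach additionally personalizes the terms and tunes the query scheme to avoid over- or under-specification.

    import re
    from collections import Counter

    STOPWORDS = {"the", "a", "an", "and", "or", "of", "in", "to", "is", "for", "on", "with"}

    def derive_query(context_text, k=5):
        # Turn the user's current context (e.g. visible page text)
        # into a keyword query for the retrieval system's API.
        tokens = re.findall(r"[a-z]{3,}", context_text.lower())
        counts = Counter(t for t in tokens if t not in STOPWORDS)
        # Keep only the k most salient terms to avoid over-specified queries.
        return " ".join(term for term, _ in counts.most_common(k))

    print(derive_query("Zero-effort queries retrieve long-tail resources "
                       "based on the context the user is currently reading."))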
With the second question, we address the presentation of resources in zero-effort query scenarios, presenting guidelines for presentation interfaces in the browser and a visualization of the triadic relationship between context, query, and results. QueryCrumbs, a compact query history visualization, supports recalling information found in the past and supports exploratory search by visualizing qualitative and quantitative query similarity.
Our last question addresses the gap between (simple) keyword queries and the representation of resources by rich and complex metadata. We investigate and extend feature representation learning techniques centered on the skip-gram model with negative sampling. Finally, we present an approach to learn representations jointly from network and text that can cope with the partial absence of one modality.
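The skip-gram model with negative sampling, which these techniques build on, maximizes the score of observed (center, context) pairs against randomly sampled negatives. The NumPy sketch below computes the loss for a single training pair; it is a didactic simplification, not the thesis's implementation.

    import numpy as np

    def sgns_loss(v_center, u_context, u_negatives):
        # Skip-gram negative-sampling loss for one pair:
        # -log sigma(u_o . v_c) - sum_n log sigma(-u_n . v_c)
        sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
        positive = -np.log(sigmoid(u_context @ v_center))
        negative = -np.sum(np.log(sigmoid(-(u_negatives @ v_center))))
        return positive + negative

    rng = np.random.default_rng(0)
    d = 50
    print(sgns_loss(rng.normal(size=d),          # center-word vector
                    rng.normal(size=d),          # true context vector
                    rng.normal(size=(5, d))))    # 5 negative samples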
Experimental results show that our zero-effort query and user profile generation approaches perform close to human level and that the visualizations are helpful in terms of transparency, efficiency, and support for exploratory search. These results indicate that the proposed zero-effort query approach indeed eases the discovery of long-tail resources and that the accompanying visualizations further facilitate this process. The joint representation model provides a first step towards bridging the gap between query and resource representation, a route we plan to investigate further in the future.
Whenever software faults can endanger human life, property, or the environment, the absence of faults must be ensured with utmost care and the best technologies available. Evidence is needed showing that all requirements are satisfied and that the risk of faults is reduced. One technique to conduct such a verification task—composed of the software to verify, the specification to check, and a model of the environment—is software model checking.
To conduct a verification task with a model checker, different models of the task are constructed. We distinguish between two types of task models: syntactic task models and semantic task models, which define the respective syntactic structure (control flow) and semantic structure (state transitions, invariants) of the verification task. When constructing such models, we can observe that similar structures and substructures reappear within and among different verification tasks. For example, the same assertions to check can appear in different functions, or the same predicate can be part of different invariants describing sets of program states. Similarities that appear during the model construction process can be the result of solving similar reasoning problems, often with computationally expensive procedures (as is typical for model checking), over and over again. Failing to reuse the results of solving similar problems, to automate repeated efforts, or to reduce the number of similar reasoning efforts is a waste of precious resources.
To address these problems, we present a common conceptual and technical foundation for sharing syntactic and semantic task artifacts for reuse, within and among verification runs. Both the syntactic construction of a verification task and the construction of its semantic model, which describes all possible behaviors and states, are covered. We study how commonalities and regularities in the task models can be taken into account to facilitate the process of sharing task artifacts for reuse and to make the overall verification process more efficient and effective. We introduce abstract transducers as the theoretical foundation of this thesis: a type of finite-state transducer with an inherent notion of abstraction for states, the input alphabet, and the output alphabet. Abstracting these transducers allows us to widen both the set of input words for which they produce output and the sets of output words. Abstract transducers are instantiated as task artifact transducers that map from program structures to the task artifacts to share. We show that the notion of abstraction provides a means for increasing the scope in which task artifacts are shared for reuse. We present two instances of task artifact transducers: Yarn transducers and precision transducers. We use Yarn transducers to provide code to weave into the control-flow structure of a computer program, and we present the Loom analysis as a means of orchestrating the weaving process. Precision transducers provide a means of sharing abstraction precisions for reuse and thus aid in defining the level of abstraction of a semantic task model. For both types of transducers, we provide empirical evidence of their practical applicability, for example, to verify Linux kernel modules, and show that they can help increase verification performance.
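To make the transducer vocabulary concrete, here is a toy finite-state transducer in Python; the abstract transducers of the thesis extend this basic notion with abstraction over states and alphabets, which this sketch deliberately does not model.

    class FiniteStateTransducer:
        # Mealy-style transducer: transitions map
        # (state, input_symbol) -> (next_state, output_symbols).
        def __init__(self, initial, transitions):
            self.initial = initial
            self.transitions = transitions

        def run(self, word):
            state, output = self.initial, []
            for symbol in word:
                state, out = self.transitions[(state, symbol)]
                output.extend(out)
            return output

    # Hypothetical example: emit an assertion artifact whenever a
    # 'call' is directly followed by a 'deref' in the control flow.
    fst = FiniteStateTransducer("q0", {
        ("q0", "call"):  ("q1", []),
        ("q0", "deref"): ("q0", []),
        ("q1", "call"):  ("q1", []),
        ("q1", "deref"): ("q0", ["assert(p != NULL);"]),
    })
    print(fst.run(["call", "deref", "deref"]))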
Job Sequencing and Tool Switching Problems with a Generalisation to Non-Identical Parallel Machines
(2020)
Manufacturing tools have dominated the manufacturing process since the 1960s. The job sequencing and tool switching problem is an NP-hard combinatorial optimization problem that was first introduced in the context of flexible manufacturing systems in the late 1980s. Since then, production systems have undisputedly changed and improved, but manufacturing tools still dominate manufacturing processes. Production and system operation processes are continuously adjusted and optimised to changing customer requirements. If the product variety requires an increasing number of tools for processing that exceeds the local tool magazine capacity of the manufacturing system, tool switches become necessary. Although tool changing times within a manufacturing centre or cell may nowadays be very small due to the high degree of automation, tool switching within a dynamic production environment is still a time-consuming process that must be avoided. In order to minimize the total tool setup time and thus enhance productivity, the objectives of the basic job sequencing and tool switching problem are to sequence a set of jobs and simultaneously determine the best tool loading. Job sequencing and tool switching problems are therefore gaining considerable attention.
Several solution approaches to the standard problem and to related versions of it exist. The first part of this dissertation assesses the current state of the art of the job sequencing and tool switching problem and provides a classification scheme for the literature on the problem and its variations. Only a few authors consider generalisations of the problem, because the level of complexity of extended problems is high. A general approach to the job sequencing and tool switching problem with non-identical parallel machines and sequence-dependent setup times is described in this dissertation. A novel mathematical model based on time periods is presented and analysed, which can be adapted to different objective functions. The last part of this dissertation is a quantitative evaluation of fast and effective construction heuristics as well as of an iterated local search algorithm, tested on a new set of benchmark instances. As such, this dissertation provides a broad basis for future evaluations of solution approaches to the job sequencing and tool switching problem with non-identical parallel machines and sequence-dependent setup times, as well as a basis for further generalisations of the problem, such as tool availability constraints or tool-size-dependent variations.
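For a fixed job sequence on a single machine, the classic Keep-Tool-Needed-Soonest (KTNS) policy yields an optimal tool loading for the basic problem: when the magazine overflows, the tool whose next use lies furthest in the future is evicted. The Python sketch below counts the resulting tool loadings and is meant only to illustrate the problem structure, not the dissertation's model for non-identical parallel machines.

    def count_tool_loadings(sequence, tools_per_job, capacity):
        # Assumes capacity >= the number of tools any single job requires.
        magazine, loadings = set(), 0

        def next_use(tool, pos):
            for j in range(pos + 1, len(sequence)):
                if tool in tools_per_job[sequence[j]]:
                    return j
            return float("inf")

        for pos, job in enumerate(sequence):
            for tool in tools_per_job[job]:
                if tool not in magazine:
                    if len(magazine) >= capacity:
                        # KTNS rule: evict a tool not needed by the current
                        # job whose next use is furthest away.
                        victims = magazine - tools_per_job[job]
                        magazine.remove(max(victims, key=lambda t: next_use(t, pos)))
                    magazine.add(tool)
                    loadings += 1
        return loadings

    jobs = {0: {1, 2}, 1: {2, 3}, 2: {1, 4}}
    print(count_tool_loadings([0, 1, 2], jobs, capacity=3))   # prints 4

Note that this count includes the initial magazine loading; depending on the convention, the initial loading may be excluded from the switch count.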
Research on flipped classroom instruction has advanced substantially in the past ten years. Flipped classroom refers to an instructional approach in which students study educational videos at home and complete homework assignments in class. Since an increasing number of teachers want to adopt the flipped classroom approach in their practice, further research, particularly in the context of secondary education, is clearly required. The two studies presented in this thesis aimed at examining the effectiveness of flipped classroom instruction in secondary education through a meta-analytic synthesis of prior studies and an intervention study with a methodologically new approach. Specifically, the studies investigated whether and under which conditions the flipped classroom approach has a positive impact on student achievement and which learners benefit most from a flipped or video-based classroom.
In the first study, meta-analytic methods were used to examine whether the flipped classroom approach, after controlling for sampling error, positively affects student achievement in secondary education. Effect sizes were calculated for the research designs pre-test-post-test (Time), post-test only (PostOnly), and pre-test-post-test with control group (Treatment). Moreover, the impact of four moderator variables as boundary conditions of flipped classroom effectiveness was estimated: disciplinary field, length of the intervention, use of a quiz, and use of a learning management system. The meta-analytic findings for the effect size Treatment confirmed the effectiveness of the flipped classroom on student achievement in comparison to traditional instruction (Cohen's d = 0.42). Moderator analyses on the effect size Time showed stronger effects for subjects in the STEM area (science, technology, engineering, mathematics) than for foreign languages and the humanities. The effect sizes were also higher for shorter intervention studies than for longer ones and when the quiz at home had been omitted. Moderator analyses on the effect sizes PostOnly and Treatment showed that the effect sizes for intervention studies without a learning management system were higher than for those with one.
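For reference, the reported effect size is the standard Cohen's d, the mean difference between the two conditions scaled by the pooled standard deviation:

    d = \frac{\bar{x}_1 - \bar{x}_2}{s_{\mathrm{pooled}}},
    \qquad
    s_{\mathrm{pooled}} = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}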
The second study aimed to compare flipped classroom with other forms of video-based instruction and determine which types of students benefit most from video-based instruction. Thirty-eight school classes with 848 ninth-grade students took part in a quasi-experimental pre-post-test intervention study over the course of four weeks. Two independent variables were completely crossed resulting in four experimental conditions: video (at home vs. in class) and instructional method (student-centred vs. teacher-centred). Multilevel analyses revealed that all four experimental conditions were equally effective in promoting students’ learning gains. At-risk, average and excellent students profited least from video-based instruction. Confident and independent students had the highest learning gains from pre- to post-test. The study constitutes a first step towards a comprehensive evaluation of flipped classroom by using a better-controlled research design and may contribute to a more objective discussion about the positive effects of flipped classroom.
Abstract concepts and ideas from Computer Science Education can benefit from immersive visualizations that can be provided in virtual environments. This thesis explores the effects of the key characteristics of virtual environments, immersion and presence, on learning outcomes in Educational Virtual Environments for learning Computer Science.
Immersion is a quantifiable property of the technology used to immerse the user in the virtual environment; presence describes the subjective feeling of 'being there'. While technological immersion can be seen as a strong predictor of presence, motivational traits, cognition, and the emotional state of the user also influence presence. A possible localization of these technological and person-specific variables in Helmke's pedagogical supply-use framework is introduced as the Educational Framework for Immersive Learning (EFiL). Presence is emphasized as a central criterion influencing immersive learning processes. The EFiL provides an educational understanding of immersive learning as learning activities initiated by a mediated or medially enriched environment that evokes a sense of presence.
The idea of Computer Science Unplugged is pursued by using Virtual Reality technology to provide interactive virtual learning experiences that can be accurately displayed and that may be schematizing, substantiating, or metaphorical. For exploring the effects of virtual environment characteristics on learning, the idea of Computer Science Replugged focuses on 'hands-on' activities and combines them with immersive technology. By providing a perception of non-mediation, Computer Science Replugged might enable experiences that contribute additional possibilities to the real activity or enable new activities for teaching Computer Science.
Three game-based Educational Virtual Environments were developed as treatments: 'Bill's Computer Workshop' introduces the components of a computer; 'Fluxi's Cryptic Potions' uses a metaphor to teach asymmetric encryption; 'Pengu's Treasure Hunt' is an immersive visualization of finite state machines. A first study with 23 middle school students was conducted to test the instruments in terms of selectivity, the levels of presence induced by the devices, and the adequacy of the selected learning objectives. The second study, with 78 middle school students playing the environments on different devices (laptop, Mobile Virtual Reality, or head-mounted display), assessed motivational, cognitive, and emotional factors as well as presence and learning outcomes.
An overall analysis showed that pre-test performance, presence, and previous scholastic performance in Maths and German predict the learning outcomes in the virtual environments. Presence could be predicted by the students' positive emotions and by the technological immersion. The level of immersion had no significant effect on learning outcomes. While a well-fitting path analysis model indicated that the relations assumed in the EFiL are largely correct for 'Bill's Computer Workshop' and 'Fluxi's Cryptic Potions', not all results of the overall path analysis remained significant in the analyses of the individual environments.
Presence seems to have a small effect on learning outcomes while being influenced by technological and emotional factors. Even though the level of immersion can be used to predict the level of presence, it is not an appropriate predictor of learning outcomes. For future studies, the questionnaires have to be revised, as some of them suffered from poor scale reliabilities. While the second study provided indications that the localization of presence and immersion in an existing educational supply-use framework is appropriate, many factors had to be excluded.
The thesis contributes to existing research as it adds factors that are crucial for learning processes to the discussion on immersive learning from an educational perspective and assesses these factors in hands-on activities in Educational Virtual Environments for Computer Science Education.
Innovate with Crowds. Co-Creation and Idea Evaluation in Internal and External Crowdsourcing.
(2020)
Crowdsourcing seems to be a promising approach for organizations to overcome challenges widely discussed in innovation and organizational research. However, the extent to which an organization can leverage the benefits of crowdsourcing is contingent on which type of crowd is addressed and on how crowds are used. Based on unique data from crowdsourcing contests, the dissertation provides insights into how to innovate with internal and external crowds in order to utilize their potential for co-creation and idea evaluation.
Data Journalism: A Deconstruction from the Perspective of Field Theory and the Sociology of Technology
(2020)
The aim of this work is to close specific research gaps by means of three publications, with the intention of contributing to a deeper description of data journalism and to theory building. Contrasting the findings of the publications with relevant studies and situating them within Bourdieu's (1976) habitus-field theory, complemented by actor-network theory (Latour, 2005) as a detailed perspective, is intended to provide explanatory and descriptive approaches and to outline possible implications for the journalistic field.
GRADUATED STANDARDS FOR THE DEVELOPMENT OF COMPETENCES IN TEACHER EDUCATION (22)
Dimension 1: Shaping the teacher's role
Dimension 2: School as a space for learning and living
Dimension 2.1: School as an organization
Dimension 2.2: Everyday school life and school community
Dimension 2.3: Classroom observation and evaluation
Dimension 2.4: School development
Dimension 3: Lesson planning, delivery, and analysis
Dimension 3.1: Structure of lessons
Dimension 3.2: Teaching formats and theories
Dimension 3.3: Designing learning environments
Dimension 3.4: Time management
Dimension 3.5: Planning tools
Dimension 3.6: Use of media
Dimension 3.7: Social formats and methods
Dimension 3.8: Dealing with heterogeneity
Dimension 4: Classroom management
Dimension 5: Diagnostics of learning processes and learning products
Dimension 6: Counselling
The main concern of this dissertation is, on the one hand, to substantiate that the Lower Austrian 'music landscape' presents a supra-regionally relevant success model of cultural development work and, on the other hand, to investigate which concrete framework conditions and success factors have contributed to this positive finding. The central question in this context is: How was it possible to develop and establish in the long term an independent and equally high-quality music scene and density of festivals in Lower Austria, of all places, the federal state surrounding the Austrian capital with its rich cultural offerings and internationally acclaimed musical life, within Vienna's sphere of influence yet independent of the capital's musical activities?
Main memory forensics and its special form, virtual machine introspection (VMI), are powerful tools for digital forensics and can be used to improve the security of computer-based systems. However, their use in production systems is often not possible. This work identifies the causes and offers practical solutions to apply these techniques in cloud computing and on mobile devices to improve digital forensics and incident analysis.
Four key challenges must be tackled. The first challenge is that many existing solutions are not reproducible, for example, because the corresponding software components are unavailable, obsolete, or incompatible. The use of these tools is also often complex, and incorrect use can crash the system being monitored. To solve this problem, this thesis describes the design and implementation of Libvmtrace, a framework for the introspection of Linux-based virtual machines. The focus of the developed design is to implement frequently used methods in encapsulated modules so that they are easy for developers to use, optimize, and test.
The second challenge is that many production systems do not provide an interface for main memory forensics and virtual machine introspection. To address this problem, this thesis describes possible solutions for how such an interface can be implemented on mobile devices and in cloud environments designed to protect main memory from unprivileged access. We discuss how cold boot attacks, the ARM TrustZone, and the hypervisor of cloud servers can be used to acquire data from main memory.
The third challenge is how to reconstruct information from main memory efficiently. This thesis addresses this challenge with two practical examples. The first involves extracting the keys of encrypted TLS connections from the main memory of applications in order to decrypt network traffic without affecting the performance of the monitored application. The TLSKex and DroidKex architectures describe two approaches to localizing the keys efficiently with the help of semantic knowledge about the applications' main memory. The second example discusses how to monitor and document the SSH sessions of potential attackers from outside a virtual machine. It is important that the monitoring routines not be noticed by an attacker; to achieve this, we evaluate how to optimize the performance of the monitoring mechanism.
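As a simplified illustration of the key-localization problem (not the semantics-guided approach of TLSKex/DroidKex, which narrows the search by following the TLS library's session structures), the following Python sketch brute-force scans a raw memory dump for 48-byte windows of near-maximal byte entropy, the typical footprint of a TLS 1.2 master secret; the file name and threshold are hypothetical.

    import math

    def shannon_entropy(window):
        counts = [0] * 256
        for b in window:
            counts[b] += 1
        n = len(window)
        return -sum(c / n * math.log2(c / n) for c in counts if c)

    def candidate_key_offsets(dump, size=48, threshold=5.2, step=4):
        # High-entropy regions are candidates for key material; a 48-byte
        # window of all-distinct bytes has entropy close to log2(48) ~ 5.58.
        for off in range(0, len(dump) - size, step):
            if shannon_entropy(dump[off:off + size]) >= threshold:
                yield off

    with open("heap.dump", "rb") as f:    # hypothetical memory dump
        dump = f.read()
    print(sum(1 for _ in candidate_key_offsets(dump)), "candidate regions")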
The fourth challenge is how to deal with the performance degradation caused by introspection in production systems. This thesis discusses how this can be achieved using the example of a SIEM system. To reduce the performance overhead, we describe how to configure the monitoring routine to collect only the information needed to detect incidents. In addition, we describe two approaches that allow the monitoring routine to be adjusted dynamically at runtime to extract more information when necessary, so that incidents can be analyzed in greater detail.
The #relichat is a weekly Twitter chat on religious education. In a one-hour discussion structured by questions, topics in religious education are discussed publicly on the Twitter platform using the hashtag #relichat. From 2017 to the summer of 2020, 89 #relichats took place, in which around 220 people from the German-speaking world actively participated.
This dissertation examines the #relichat project hermeneutically and evaluates the participants' experiences of #relichat as an informal professional development format.
Title of the dissertation:
#relichat - Informal Learning with Twitter. In-Service Training for Religion Teachers as Social-Constructivist Networking in Communities of Practice
The hermeneutic part of the thesis discusses aspects of social media, of informal and constructivist learning, of networked learning in communities of practice, and of public learning, as well as the self-understanding of the church as an institution in the public sphere.
A mixed-methods design was chosen for the evaluation, examining the project from different positions and thereby doing justice to the complexity of the undertaking. Besides the examination of statistical data (access figures, participation, etc.), the analysis of the thematic development of the discussions according to the concept of topic constitution, and the analysis of the discussions using methods from text linguistics, the evaluation of qualitative interviews with #relichat participants formed the core of the research. Using grounded theory methodology, a theory of networked learning with social media was developed that claims generalizability beyond the concrete #relichat project.
The results can be summarized as follows:
Learning in and with the #relichat is constructivist, informal, self-organized, and self-responsible learning. It can be described as professional development insofar as it can bring about a further development of one's own practice in religious education. The social relationships within the community of practice play a special role. The medium Twitter sets the framework conditions of the communication.
Based on the findings of the #relichat research project, it can be expected that forms of informal learning in communities of practice that exploit the possibilities of social communication in digital media will become more widespread in the future. This applies to lifelong learning in general, but also to teachers' in-service training. Networking will become even more significant as a resource for continuing education.
In the future, dialogue and communication will increasingly take place in public, and hierarchical structures will consequently lose more and more of their societal legitimation.
Investment fraud, cybercrime, inconsistencies in health care, or the emission scams at car manufacturers: economic crime (fraud) manifests itself in many facets. For Germany, the cases of FlowTex, Comroad, HRE-Bad-Bank, Holzmann, Volkswagen, and the current fraud suspicions at Porsche AG are prominent examples with mostly appalling consequences (Ballwieser and Dobler 2003; Kögler 2015; Meck, Nienhaus, and von Petersdorff 2011; Peemöller and Hofmann 2005). Newspapers without reports on fraud have nevertheless become scarce; headlines such as "Corruption - the daily business" hardly impress anyone anymore, not least because of their regularity. The publicly revealed cases are, however, only the tip of the iceberg, as renowned experts report (Bundeskriminalamt 2018; LKA 2018). Currently, the State Criminal Police Office (Landeskriminalamt (LKA)) of Baden-Württemberg and its department for economic and environmental crime and corruption is handling 72 major proceedings (LKA 2018). Fraud could, however, be avoided or at least contained by appropriate preventive measures (Bundeskriminalamt 2018; Bussmann 2004; Hlavica, Klapproth, and Hülsberg 2011). Consequently, the pressure on companies and employees to demonstrate compliant and ethical behavior and to meet the demands of stakeholders at all times within their business activities has grown (Buff 2000). This raises the question of which precautionary measures a company can and must implement (Weick and Sutcliffe 2015). Although corporate awareness of this issue has increased, most in-house detection of fraud is accidental, suggesting that companies still lack appropriately functioning and systematic (early) detection mechanisms (Hlavica et al. 2011). If a company is accused of fraud, this usually has serious repercussions for its corporate reputation. Prior research found that capital-market reputation-based penalties for affected companies are on average 7.5 times higher than the penalties imposed by the legal system (Karpoff, Lee, and Martin 2008). Furthermore, an accusation of fraud also affects the external auditor's reputation, since failing to detect manipulations in clients' (financial) reports damages public confidence not only in the accuracy of firms' financial statements but also in the reliability of the auditor's report. It is therefore not surprising that the demand for greater supervision and control of firms' (financial) reporting, as well as for reliable work by statutory auditors, continually increases (Herkendell 2007). Although to a lesser extent, this also holds for the determination of material (accounting) errors within a firm's financial statements, which are often difficult to distinguish from accounting fraud. According to International Accounting Standard (IAS) 8.5, published by the International Accounting Standards Board (IASB), errors are omissions and/or misstatements of items that result from the failure to use, or the misuse of, reliable information (IASB 2003). Accounting errors and accounting fraud thus both result in incorrect information in a firm's financial reports and consequently affect stakeholders' decision-making. One attempt to counteract the broad demand for appropriate protective measures was the implementation of a two-stage enforcement system involving the German Financial Reporting Enforcement Panel (Deutsche Prüfstelle für Rechnungslegung (DPR)) as part of the Financial Reporting Enforcement Act (Bilanzkontrollgesetz (BilKoG)) adopted in 2004.
The primary objective of the Federal Government in implementing this mechanism was to restore investors' lost confidence in the German capital market, in the information content of financial reporting, and in Germany as a financial center in international competition. In addition, the enforcement system serves as a sanctioning instrument for firms in the event of an error detection and the subsequent adverse error disclosure via the German federal registry (elektronischer Bundesanzeiger). This adverse error disclosure not only sanctions the denounced firms but also calls into question the quality of the annual financial statement audit and thus the quality of the responsible audit firm. Hence, the often thin line between firms' unintentional accounting errors, purposive engagement in earnings management, and intentional fraud presents an increasing challenge for the audit profession.
The objective of my cumulative dissertation is to provide a comprehensive overview of fraud and forensic accounting as well as insights into the distinct dimensions of the concepts of errors, earnings management, and fraud from a German accounting perspective. I aim to achieve this objective in three steps: first (1), by providing an overview of discipline-specific education possibilities, existing forensic accounting practices, institutions, and current developments in research; second (2), by assessing auditors' obligations and responsibilities for the detection of irregularities within the scope of the annual financial statement audit, and whether including forensic services in the service portfolio of audit firms can help increase their audit quality through spillover effects; third (3), by examining firms' reputation (re-)building management in response to financial violations and how this process is associated with managing multiple (stakeholder) reputations. This dissertation is composed of three individual papers, each of which considers one of the focus areas outlined above.
The aim of this work is to shed light on the ways in which young people enter right-wing extremist scenes and on the social processes involved. The intercultural comparison between Germany and the USA supports the finding that what is primarily relevant here are not country-specific factors but the recruitment strategies of right-wing scenes, which in similar ways latch onto factors that are present in both countries.