000 Computer science, information & general works
Executing system calls requires switching the privilege level when entering the kernel. This switch is considered essential to secure the operating system from unwanted access but is also costly. In other contexts, on-demand rather than proactive action allows paying overhead costs only when necessary. For example, non-blocking synchronization forgoes proactive locking in favor of retries if a conflict occurs.
We propose that single-address-space operating systems allow for an on-demand approach for system calls. When the operating-system data is hidden by the sheer size of the address space, elevated privileges are only necessary when performing privileged hardware operations. We believe that single-address-space systems would benefit from such an on-demand approach.
We introduce a concept that allows deferring the privilege-level switch until it is actually needed, that is, when the first privileged instruction is encountered. We present a prototype implementation of the concept as an adaptation of the Linux kernel, since no widespread single-address-space kernel exists.
This thesis studies the interactions between the theory of graded monads, coalgebras, and the semantics of concurrent systems. On a more concrete level, the results in this thesis contribute to the categorical framework of graded semantics, which combines graded monads with (universal) coalgebra to yield a uniform setting for the analysis of Linear-time–Branching-time style spectra in the sense of Van Glabbeek [34]; i.e. the system type and granularity of semantics are both parameters of the framework. Instances on labelled transition systems range from fine-grained equivalences such as bisimilarity to more coarse-grained ones like trace equivalence. While existing work has primarily focused on graded semantics in the category Set of sets and functions [25,70] (corresponding to behavioural equivalences), we lift the overall framework to categories of Horn-definable relational structures (e.g. partially ordered sets and metric spaces). In this setting, the relational type of semantics (e.g. equivalences, preorders, metrics) is an additional parameter. We focus on graded behavioural equivalences and preorders. The core contributions of this thesis are then the development of algebraic, logical, and game-theoretical tools for the analysis of graded semantics in (restrictions of) the described setting, as well as a generic coalgebraic determinization construction under graded behavioural equivalences.
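Since the framework hinges on the notion of a graded monad, a brief reminder of the standard definition may help (this is the usual textbook form, not specific to this thesis; the grading monoid and the example are the common illustrative choices): a graded monad on a category $\mathcal{C}$, graded by $(\mathbb{N},+,0)$, consists of a family of functors $M_n\colon\mathcal{C}\to\mathcal{C}$, a unit $\eta\colon\mathrm{Id}\Rightarrow M_0$, and multiplications $\mu^{n,k}\colon M_nM_k\Rightarrow M_{n+k}$ satisfying graded analogues of the monad laws. For instance, $M_nX=\mathcal{P}(A^n\times X)$ underlies depth-$n$ trace semantics of labelled transition systems.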
3D reconstruction of deformable (or non-rigid) scenes from a set of monocular 2D image observations is a long-standing and actively researched area of computer vision and graphics. It is an ill-posed inverse problem, since, without additional prior assumptions, it permits infinitely many solutions leading to accurate projection to the input 2D images. Non-rigid reconstruction is a foundational building block for downstream applications like robotics, AR/VR, or visual content creation. The key advantage of using monocular cameras is their omnipresence and availability to the end users as well as their ease of use compared to more sophisticated camera set-ups such as stereo or multi-view systems. This survey focuses on state-of-the-art methods for dense non-rigid 3D reconstruction of various deformable objects and composite scenes from monocular videos or sets of monocular views. It reviews the fundamentals of 3D reconstruction and deformation modeling from 2D image observations. We then start from general methods, which handle arbitrary scenes and make only a few prior assumptions, and proceed towards techniques making stronger assumptions about the observed objects and types of deformations (e.g. human faces, bodies, hands, and animals). A significant part of this STAR is also devoted to classification and a high-level comparison of the methods, as well as an overview of the datasets for training and evaluation of the discussed techniques. We conclude by discussing open challenges in the field and the social aspects associated with the usage of the reviewed methods.
Strategy practitioners and executives in supply chains are facing increasingly dynamic environments in which their firms operate. Besides disruptive events such as the COVID-19 pandemic, technological progress is shaping the environment of supply chains and changing the requirements of supply chain actors. To keep company performance competitive in such environments, companies need to react to technological changes to drive customer service and operational excellence and to leverage data to improve supply chain operations. Since technology has been identified as a driver of these effects, blockchain technology, besides others such as the Internet of Things, Artificial Intelligence, or Big Data Analytics, has garnered increasing attention from both academics and practitioners. On the practitioner side, Gartner named the technology a major supply chain technology trend three years in a row from 2017 to 2019. From an academic angle, scholars began investigating potential blockchain use cases in supply chains in 2016, working towards more transparent, reliable, and efficient supply chain management.
Even though blockchain technology itself builds on promising concepts and ideas, to this day the technology remains largely theoretical, with only a few test applications in real-world supply chains. Against this background, this dissertation sets the foundation for answering open questions and provides guidance on the challenges that executives and strategy practitioners face in understanding and adopting blockchain technology in supply chains. The individual articles comprising this work provide an overview and then delve deeper into individual core elements of blockchain technology in supply chains. The overall approach thus streamlines the existing academic research in different formats, consolidating the current research status and extending it by highlighting the blind spots of the existing theoretical foundation. In addition, one article examines how executives and strategy practitioners perceive the future of supply chain management under the technological developments shaping the freight forwarding environment.
The first research paper, “Blockchain technology in logistics and supply chain management – a bibliometric literature review from 2016 to January 2020”, explores and maps the existing literature on blockchain technology in supply chain management. The study summarizes 613 articles from academic supply chain research. The paper represents the first bibliometric analysis in this space, laying the foundation for newcomers to the topic by streamlining the research agenda. The study supports interested parties by creating an understanding of the most relevant developments, identifying the most influential works, and tracing trends in blockchain research within the domain of supply chain management. Methodologically, the study adopts a citation network analysis and a co-citation analysis. Based on the co-citation analysis, the paper classifies the existing literature into five research clusters: 1) theoretical sensemaking, 2) conceptualising and testing blockchain applications, 3) framing blockchain into supply chains, 4) the technical design of blockchain applications for real-world supply chain applications, and 5) the role of blockchain technology within digital supply chains. The study applies a practitioner-relevance classification, providing a taxonomy that points practitioners to the most relevant studies in their respective fields. Further, the study highlights the most influential works in the space, giving newcomers an easy entry into the topic.
The second essay focuses on the freight forwarding industry, which plays a key role in running global supply chains, with an expected sales revenue of $170 billion in 2021. With the challenges of COVID-19 and massive supply chain disruptions, particularly in global trade, the thriving topic of digitization in supply chains has gained additional attention in both the academic and the practitioner world. Visibility and knowledge gaps regarding the status of products have become a challenge for both freight forwarders and their customers, calling for a change in how freight forwarders are organized and how they interact with technology. Since technological foresight studies for the freight forwarding space are scarce, the specific expected impacts of digitization on freight forwarding remain unexplored. The aim of this essay is to examine the changes that freight forwarding professionals and academics expect in the industry until 2051 against the background of current technological developments such as blockchain technology. Overall, 84 international experts shared their estimates through a Delphi survey. The results are grouped into four clusters that provide an outlook on the future of freight forwarding. They show that freight forwarding organizations should adapt to developing customer expectations and to a change in the freight forwarder profile due to new requirements. Further, human resource activities should focus on the upcoming change in the industry, where the “war for talent” is expected to evolve into a “war for digital talent”. Still, the experts see a high relevance of freight forwarding services in the future despite all technological developments, and they regard technology not as a threat to the freight forwarding industry but as an opportunity. With regard to blockchain technology, the experts report a limited understanding of the technology compared to other concepts such as artificial intelligence and big data analytics. The study supports practitioners as well as academics in understanding future scenarios, creates an outlook for the industry, and identifies trends within the domain of freight forwarding.
The third essay focuses on the food sector, the consumer vertical where trust plays the most central role for end customers. Since practitioners experience the literature as increasingly complex and intransparent, it is difficult for managers to find easy access to the subject. The paper therefore adopts a structured literature review approach, streamlining the existing landscape of 31 peer-reviewed articles on blockchain in the food sector. It creates an entry point into the issue of blockchain in the food supply chain domain and provides guidance for supply chain managers on where to start in the academic literature. The findings organize the existing literature into four research streams: 1) implementation review and adoption, 2) technology application and use cases, 3) theoretical frameworks, and 4) data integrity and security. The study supports interested parties in understanding the most relevant developments and trends in blockchain research within the domain of food supply chain management.
Pulmonary hemorrhage (P-Hem) occurs among multiple species and can have various causes. Cytology of bronchoalveolar lavage fluid (BALF) using a 5-tier scoring system of alveolar macrophages based on their hemosiderin content is considered the most sensitive diagnostic method. We introduce a novel, fully annotated multi-species P-Hem dataset, which consists of 74 cytology whole slide images (WSIs) with equine, feline and human samples. To create this high-quality and high-quantity dataset, we developed an annotation pipeline combining human expertise with deep learning and data visualisation techniques. We applied a deep learning-based object detection approach, trained on 17 expertly annotated equine WSIs, to the remaining 39 equine, 12 human and 7 feline WSIs. The resulting annotations were semi-automatically screened for errors on multiple types of specialised annotation maps and finally reviewed by a trained pathologist. Our dataset contains a total of 297,383 hemosiderophages classified into five grades. It is one of the largest publicly available WSI datasets with respect to the number of annotations, the scanned area and the number of species covered.
How does the mind organize thoughts? The hippocampal-entorhinal complex is thought to support domain-general representation and processing of structural knowledge of arbitrary state, feature and concept spaces. In particular, it enables the formation of cognitive maps, and navigation on these maps, thereby broadly contributing to cognition. It has been proposed that the concept of multi-scale successor representations provides an explanation of the underlying computations performed by place and grid cells. Here, we present a neural network-based approach to learn such representations, and its application to different scenarios: a spatial exploration task based on supervised learning, a spatial navigation task based on reinforcement learning, and a non-spatial task where linguistic constructions have to be inferred by observing sample sentences. In all scenarios, the neural network correctly learns and approximates the underlying structure by building successor representations. Furthermore, the resulting neural firing patterns are strikingly similar to experimentally observed place and grid cell firing patterns. We conclude that cognitive maps and neural network-based successor representations of structured knowledge provide a promising way to overcome some of the shortcomings of deep learning on the way towards artificial general intelligence.
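As background on the successor-representation idea itself (a standard reinforcement-learning construction, not the specific network of this work): the SR M(s, s') estimates the expected discounted future occupancy of state s' when starting from s, and can be learned with a temporal-difference update. A minimal tabular sketch, using a random walk on a ring as a toy stand-in for the described tasks:

```python
import numpy as np

n_states, gamma, alpha = 10, 0.95, 0.1
M = np.zeros((n_states, n_states))  # successor representation matrix

rng = np.random.default_rng(0)
s = 0
for _ in range(50_000):
    # toy environment: random walk on a ring of states
    s_next = (s + rng.choice([-1, 1])) % n_states
    # TD update: M(s, .) <- M(s, .) + alpha * (1{s} + gamma * M(s', .) - M(s, .))
    onehot = np.eye(n_states)[s]
    M[s] += alpha * (onehot + gamma * M[s_next] - M[s])
    s = s_next

# rows of M now approximate expected discounted state occupancies;
# place-cell-like fields correspond to columns of M, and grid-cell-like
# codes are commonly related to eigenvectors of (the symmetrized) M
eigvals, eigvecs = np.linalg.eigh((M + M.T) / 2)
```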
In systems involving quantitative data, such as probabilistic, fuzzy, or metric systems, behavioural distances provide a more fine-grained comparison of states than two-valued notions of behavioural equivalence or behaviour inclusion. Like in the two-valued case, the wide variation found in system types creates a need for generic methods that apply to many system types at once. Approaches of this kind are emerging within the paradigm of universal coalgebra, based either on lifting pseudometrics along set functors or on lifting general real-valued (fuzzy) relations along functors by means of fuzzy lax extensions. An immediate benefit of the latter is that they allow bounding behavioural distance by means of fuzzy (bi-)simulations that need not themselves be hemi- or pseudometrics; this is analogous to classical simulations and bisimulations, which need not be preorders or equivalence relations, respectively. The known generic pseudometric liftings, specifically the generic Kantorovich and Wasserstein liftings, both can be extended to yield fuzzy lax extensions, using the fact that both are effectively given by a choice of quantitative modalities. Our central result then shows that in fact all fuzzy lax extensions are Kantorovich extensions for a suitable set of quantitative modalities, the so-called Moss modalities. For nonexpansive fuzzy lax extensions, this allows for the extraction of quantitative modal logics that characterize behavioural distance, i.e. satisfy a quantitative version of the Hennessy-Milner theorem; equivalently, we obtain expressiveness of a quantitative version of Moss' coalgebraic logic. All our results explicitly hold also for asymmetric distances (hemimetrics), i.e. notions of quantitative simulation.
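Schematically, a quantitative Hennessy-Milner theorem of the kind established here states that behavioural distance is characterized by a quantitative modal logic $\mathcal{L}$, in the sense that

$$d(x,y) \;=\; \sup_{\varphi\in\mathcal{L}} \big|\,\llbracket\varphi\rrbracket(x) - \llbracket\varphi\rrbracket(y)\,\big|,$$

shown here in its symmetric (pseudometric) form for orientation; the asymmetric hemimetric case replaces the absolute difference with truncated subtraction.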
With the rise and ever-increasing potential of deep learning techniques in recent years, publicly available medical datasets became a key factor to enable reproducible development of diagnostic algorithms in the medical domain. Medical data contains sensitive patient-related information and is therefore usually anonymized by removing patient identifiers, e.g., patient names, before publication. To the best of our knowledge, we are the first to show that a well-trained deep learning system is able to recover the patient identity from chest X-ray data. We demonstrate this using the publicly available large-scale ChestX-ray14 dataset, a collection of 112,120 frontal-view chest X-ray images from 30,805 unique patients. Our verification system is able to identify whether two frontal chest X-ray images are from the same person with an AUC of 0.9940 and a classification accuracy of 95.55%. We further highlight that the proposed system is able to reveal the same person even ten or more years after the initial scan. When pursuing a retrieval approach, we observe an mAP@R of 0.9748 and a precision@1 of 0.9963. Furthermore, we achieve an AUC of up to 0.9870 and a precision@1 of up to 0.9444 when evaluating our trained networks on external datasets such as CheXpert and the COVID-19 Image Data Collection. Based on this high identification rate, a potential attacker may leak patient-related information and additionally cross-reference images to obtain more information. Thus, there is a great risk of sensitive content falling into unauthorized hands or being disseminated against the will of the concerned patients. Especially during the COVID-19 pandemic, numerous chest X-ray datasets have been published to advance research. Therefore, such data may be vulnerable to potential attacks by deep learning-based re-identification algorithms.
Due to morphological similarities, the differentiation of histologic sections of cutaneous tumors into individual subtypes can be challenging. Recently, deep learning-based approaches have proven their potential for supporting pathologists in this regard. However, many of these supervised algorithms require a large amount of annotated data for robust development. We present a publicly available dataset of 350 whole slide images of seven different canine cutaneous tumors complemented by 12,424 polygon annotations for 13 histologic classes, including seven cutaneous tumor subtypes. In inter-rater experiments, we show a high consistency of the provided labels, especially for tumor annotations. We further validate the dataset by training a deep neural network for the task of tissue segmentation and tumor subtype classification. We achieve a class-averaged Jaccard coefficient of 0.7047 overall, and of 0.9044 for tumor in particular. For classification, we achieve a slide-level accuracy of 0.9857. Since canine cutaneous tumors possess various histologic homologies to human tumors, the added value of this dataset is not limited to veterinary pathology but extends to more general fields of application.
We provide a generic algorithm for constructing formulae that distinguish behaviourally inequivalent states in systems of various transition types such as nondeterministic, probabilistic or weighted; genericity over the transition type is achieved by working with coalgebras for a set functor in the paradigm of universal coalgebra. For every behavioural equivalence class in a given system, we construct a formula which holds precisely at the states in that class. The algorithm instantiates to deterministic finite automata, transition systems, labelled Markov chains, and systems of many other types. The ambient logic is a modal logic featuring modalities that are generically extracted from the functor; these modalities can be systematically translated into custom sets of modalities in a postprocessing step. The new algorithm builds on an existing coalgebraic partition refinement algorithm. It runs in time O((m + n) log n) on systems with n states and m transitions, and the same asymptotic bound applies to the dag size of the formulae it constructs. This improves the bounds on run time and formula size compared to previous algorithms even for previously known specific instances, viz. transition systems and Markov chains; in particular, the best previous bound for transition systems was O(mn).
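To illustrate the general idea on the simplest instance (plain labelled transition systems), here is a naive partition-refinement sketch; it computes bisimilarity classes, from whose refinement history distinguishing formulae can be read off, but makes no attempt at the paper's O((m + n) log n) bound or its generic coalgebraic formulation:

```python
def bisimilarity_classes(states, labels, succ):
    """Naive partition refinement on a labelled transition system.

    succ(s, a) -> set of successor states of s under label a.
    Returns a dict mapping each state to its block id; two states are
    bisimilar iff they end up in the same block.
    """
    block = {s: 0 for s in states}   # start with a single block
    n_blocks = 1
    while True:
        # signature: own block plus the blocks reachable under each label
        sig = {s: (block[s],
                   frozenset((a, block[t]) for a in labels for t in succ(s, a)))
               for s in states}
        ids = {}
        new_block = {s: ids.setdefault(sig[s], len(ids)) for s in states}
        if len(ids) == n_blocks:     # no block was split: fixpoint reached
            return new_block
        block, n_blocks = new_block, len(ids)

# example: p loops on a; q steps to a deadlocked r; formula <a><a>true
# distinguishes p from q
states, labels = {"p", "q", "r"}, {"a"}
edges = {("p", "a"): {"p"}, ("q", "a"): {"r"}}
classes = bisimilarity_classes(states, labels,
                               lambda s, a: edges.get((s, a), set()))
```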
The melting of ice sheets and glaciers is one of the main contributors to global sea-level rise. Hence, continuous monitoring of glacier changes and in particular the mapping of positional changes of their calving front is of significant importance. This delineation process, in general, has been carried out manually, which is time-consuming and not feasible for the abundance of available data within the past decade. Automatic delineation of the glacier fronts in synthetic aperture radar (SAR) images can be performed using deep learning-based U-Net models. This article aims to study and survey the components of a U-Net model and optimize the model to get the most out of U-Net for glacier (calving front) segmentation. We trained the U-Net to segment the SAR images of Sjogren-Inlet and Dinsmoore–Bombardier–Edgworth glacier systems in the Antarctic Peninsula region taken by ERS-1/2, Envisat, RadarSAT-1, ALOS, TerraSAR-X, and TanDEM-X missions. The U-Net model was optimized in six aspects. The first two aspects, namely data preprocessing and data augmentation, enhanced the representation of information in the image. The remaining four aspects optimized the feature extraction of U-Net by finding the best-suited loss function, bottleneck, normalization technique, and dropouts for the glacier segmentation task. The optimized U-Net model achieves a dice coefficient score of 0.9378 with a 20% improvement over the baseline U-Net model, which achieved a score of 0.7377. This segmentation result is further postprocessed to delineate the calving front. The optimized U-Net model shows a 23% improvement in the glacier front delineation compared to the baseline model.
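For reference, the dice coefficient used as the evaluation score here (and, in soft form, a common segmentation loss) is the standard overlap measure between predicted and ground-truth masks; a minimal sketch of its definition, not code from the article:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks (e.g., glacier vs. background)."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    # 2|A ∩ B| / (|A| + |B|); eps guards against empty masks
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```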
Simulating mobile liquid–gas interfaces with the free-surface lattice Boltzmann method (FSLBM) requires frequent re-initialization of fluid flow information in computational cells that convert from gas to liquid. The corresponding algorithm, here referred to as the refilling scheme, is crucial for the successful application of the FSLBM in terms of accuracy and numerical stability. This study compares five refilling schemes that extract information from the surrounding liquid and interface cells by averaging, extrapolating, or assuming one of the three different equilibrium states. Six numerical experiments were performed, covering a broad spectrum of possible scenarios. These include a standing gravity wave, a rectangular and cylindrical dam break, a Taylor bubble, a drop impact into liquid, and a bubbly plane Poiseuille flow. In some simulations, the averaging, extrapolation, and one equilibrium-based scheme were numerically unstable. Overall, the results have shown that the simplest equilibrium-based scheme should be preferred in terms of numerical stability, computational cost, accuracy, and ease of implementation.
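To make the equilibrium-based idea concrete: when a gas cell converts to liquid, its unknown populations can be initialized with the standard lattice Boltzmann equilibrium computed from a density and velocity estimated from neighboring liquid and interface cells. A D2Q9 sketch under that assumption (illustrative only; the study's exact scheme variants and averaging rules are in the paper):

```python
import numpy as np

# D2Q9 lattice: discrete velocities c_i and weights w_i
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, u):
    """Second-order equilibrium populations f_i^eq(rho, u), with c_s^2 = 1/3."""
    cu = c @ u                       # c_i . u for all nine directions
    usq = u @ u
    return w * rho * (1 + 3 * cu + 4.5 * cu**2 - 1.5 * usq)

# refilling: estimate rho and u by averaging over valid neighbors, then
# set the freshly converted liquid cell's populations to the equilibrium
rho_avg, u_avg = 1.0, np.array([0.01, 0.0])   # assumed neighbor averages
f_new = equilibrium(rho_avg, u_avg)
```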
Target localization and classification from radar point clouds is a challenging task due to the inherently sparse nature of the data and the highly non-uniform target distribution. This work presents HARadNet, a novel attention-based, anchor-free target detection and classification network architecture in a multi-task learning framework for radar point cloud data. A direction field vector is used as a motion modality to achieve attention inside the network. The attention operates at different hierarchies of the feature abstraction layers, with each point sampled according to a conditional direction field vector, allowing the network to exploit and learn a joint feature representation and its correlation to the neighborhood. This leads to a significant improvement in classification performance. Additionally, a parameter-free target localization is proposed using Bayesian sampling conditioned on a pre-trained direction field vector. An extensive evaluation on a public radar dataset shows a substantial increase in localization and classification performance.
Jet streams are fast three-dimensional coherent air flows that interact with other atmospheric structures such as warm conveyor belts (WCBs) and the tropopause. Individually, these structures have a significant impact on the midlatitude weather evolution, and the impact of their interaction is still a subject of research in the atmospheric sciences. A first step towards a deeper understanding of the meteorological processes is to extract the geometry of jet streams, for which we develop an integration-based feature extraction algorithm. Thus, rather than characterizing jet core lines purely as extremal line structures of wind magnitude, our core-line definition includes a regularization to favor jet core lines that align with the wind vector field. Based on the line geometry, proximity-based filtering can automatically detect potential interactions between WCBs and jets, and results of an automatic detection of split and merge events of jets can be visualized in relation to the tropopause. Taking ERA5 reanalysis data as input, we first extract jet stream core lines using an integration-based predictor–corrector approach that admits momentarily weak air streams. Using WCB trajectories and the tropopause geometry as context, we visualize individual cases, showing how WCBs influence the acceleration and displacement of jet streams, and how the tropopause behaves near split and merge locations of jets. Multiple geographical projections, slicing, as well as direct and indirect volume rendering further support the interactive analysis. Using our tool, we obtained a new perspective on the three-dimensional jet movement, which can stimulate follow-up research.
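The numerical core of such integration-based extraction is predictor-corrector stepping through the wind field. A minimal illustration of that ingredient (Heun's method on a generic 3D vector field; the paper's jet-specific regularization and correction toward core-line conditions are not reproduced here):

```python
import numpy as np

def trace_line(v, x0, h=0.1, n_steps=200):
    """Predictor-corrector (Heun) integration of a seed point x0
    through a vector field v: R^3 -> R^3 (e.g., the wind field)."""
    line = [np.asarray(x0, dtype=float)]
    for _ in range(n_steps):
        x = line[-1]
        x_pred = x + h * v(x)                       # predictor: Euler step
        x_corr = x + h * 0.5 * (v(x) + v(x_pred))   # corrector: trapezoidal rule
        line.append(x_corr)
    return np.array(line)

# toy field standing in for reanalysis wind: a simple shear flow
line = trace_line(lambda x: np.array([1.0, 0.1 * x[2], 0.0]), [0.0, 0.0, 1.0])
```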
Background: Fitness trackers and smart watches are frequently used to collect data in longitudinal medical studies. They allow continuous recording in real-life settings, potentially revealing previously uncaptured variabilities of biophysiological parameters and diseases. Adequate device accuracy is a prerequisite for meaningful research.
Objective: This study aims to assess the heart rate recording accuracy in two previously unvalidated devices: Fitbit Charge 4 and Samsung Galaxy Watch Active2.
Methods: Participants performed a study protocol comprising 5 resting and sedentary, 2 low-intensity, and 3 high-intensity exercise phases, lasting an average of 19 minutes 27 seconds. Participants wore two wearables simultaneously during all activities: Fitbit Charge 4 and Samsung Galaxy Watch Active2. Reference heart rate data were recorded using a medically certified Holter electrocardiogram. The data of the reference and evaluated devices were synchronized and compared at 1-second intervals. The mean, mean absolute error, mean absolute percentage error, Lin concordance correlation coefficient, Pearson correlation coefficient, and Bland-Altman plots were analyzed.
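The listed agreement statistics are standard; for orientation, a sketch of how they can be computed on synchronized 1-second heart-rate series (illustrative only, not the study's analysis code):

```python
import numpy as np

def agreement_metrics(device, reference):
    """Agreement between a wearable's heart-rate series and the Holter ECG."""
    device, reference = np.asarray(device, float), np.asarray(reference, float)
    err = device - reference
    mae = np.mean(np.abs(err))                        # mean absolute error (bpm)
    mape = np.mean(np.abs(err) / reference) * 100     # mean absolute % error
    r = np.corrcoef(device, reference)[0, 1]          # Pearson correlation
    # Lin's concordance correlation coefficient
    ccc = (2 * np.cov(device, reference, bias=True)[0, 1]
           / (device.var() + reference.var()
              + (device.mean() - reference.mean())**2))
    # Bland-Altman components: bias and 95% limits of agreement
    bias, half_width = err.mean(), 1.96 * err.std()
    return {"MAE": mae, "MAPE": mape, "r": r, "CCC": ccc,
            "bias": bias, "LoA": (bias - half_width, bias + half_width)}
```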
Results: A total of 23 healthy adults (mean age 24.2, SD 4.6 years) participated in our study. Overall, and across all activities, the Fitbit Charge 4 slightly underestimated the heart rate, whereas the Samsung Galaxy Watch Active2 overestimated it (mean error −1.66 beats per minute [bpm] vs +3.84 bpm). The Fitbit Charge 4 achieved a lower mean absolute error during resting and sedentary activities (seated rest: 7.8 vs 9.4; typing: 8.1 vs 11.6; laying down [left]: 7.2 vs 9.4; laying down [back]: 6.0 vs 8.6; and walking slowly: 6.8 vs 7.7 bpm), whereas the Samsung Galaxy Watch Active2 performed better during and after low- and high-intensity activities (standing up: 12.3 vs 9.0; walking fast: 6.1 vs 5.8; stairs: 8.8 vs 6.9; squats: 15.7 vs 6.1; resting: 9.6 vs 5.6 bpm).
Conclusions: Device accuracy varied with activity. Overall, both devices achieved a mean absolute percentage error of just under 10%. Thus, they can be considered to produce valid results based on the limits established by previous work in the field. Neither device reached sufficient accuracy during seated rest or keyboard typing. Both devices may nevertheless be eligible for use in respective studies; however, researchers should consider their individual study requirements.
Approximate Computing is a design paradigm that trades off computational accuracy for gains in non-functional aspects such as reduced area, increased computation speed, or power reduction. Computing the error of the approximated design is an essential step to determine its quality. The computation time for determining the error can become very large, effectively rendering the entire logic approximation procedure infeasible. In this work, we extensively analyze various error metrics and approximation operations. We present methods to accelerate error metric computation by 1) exploiting structural information of the function obtained by applying the analyzed operations and 2) computing estimates of the metrics for multi-output Boolean functions represented as Binary Decision Diagrams (BDDs). We further present a novel greedy, bucket-based BDD minimization framework employing the newly proposed error metric computations to produce Pareto-optimal solutions with respect to BDD size and multiple error metrics. The applicability of the proposed minimization framework is demonstrated by an experimental evaluation. We can report considerable speedups while, at the same time, creating high-quality approximated BDDs. The presented framework is publicly available as open-source software on GitHub.
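As background on the error metrics themselves (independent of the BDD-based acceleration the paper proposes), the error rate and worst-case error of an approximated combinational function can be computed by exhaustive comparison for small bit-widths; a minimal sketch with a toy approximate adder as the design under test:

```python
from itertools import product

def error_metrics(exact, approx, n_bits):
    """Error rate and worst-case error of an approximated n-bit function."""
    n_errors, worst = 0, 0
    for bits in product((0, 1), repeat=n_bits):
        e, a = exact(bits), approx(bits)
        if e != a:
            n_errors += 1
            worst = max(worst, abs(e - a))
    return n_errors / 2**n_bits, worst

def exact_add(bits):
    """Exact 2-bit + 2-bit adder on a flat bit tuple (MSB first)."""
    a = bits[0] * 2 + bits[1]
    b = bits[2] * 2 + bits[3]
    return a + b

def approx_add(bits):
    return exact_add(bits) & 0b110   # approximation: force the LSB to zero

rate, worst = error_metrics(exact_add, approx_add, 4)   # 0.5, 1
```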
Millimeter-wave sensing using automotive radar imposes high requirements on the applied signal processing in order to obtain the necessary resolution for current imaging radar. High-resolution direction of arrival estimation is needed to achieve the desired spatial resolution, which is limited by the total antenna array aperture. This work gives an overview of recent progress in the field of deep learning-based direction of arrival estimation in the automotive radar context, i.e. using only a single measurement snapshot. Additionally, several deep learning models are compared and investigated with respect to their suitability for automotive angle estimation. The models are trained with model- and data-based approaches for data generation, including simulated scenarios as well as real measurement data from more than 400 automotive radar sensors. Finally, their performance is compared to several baseline angle estimation algorithms such as the maximum-likelihood estimator. All results are discussed with respect to the estimation error, the resolution of closely spaced targets and the total estimation accuracy. The overall results demonstrate the viability and advantages of the proposed data generation methods, as well as the super-resolution capabilities of several architectures.
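As a concrete example of the kind of baseline such models are compared against, a conventional beamformer scans candidate angles with steering vectors on a single snapshot from a uniform linear array (a textbook method; the paper's deep learning models and its maximum-likelihood baseline are separate):

```python
import numpy as np

def doa_beamformer(snapshot, n_antennas, d=0.5,
                   grid=np.linspace(-90, 90, 361)):
    """Single-snapshot conventional beamformer for a uniform linear array.

    snapshot: complex array of shape (n_antennas,)
    d: element spacing in wavelengths
    Returns the angular power spectrum over the grid (degrees).
    """
    k = np.arange(n_antennas)
    theta = np.deg2rad(grid)
    # steering matrix: one column per candidate angle
    A = np.exp(2j * np.pi * d * np.outer(k, np.sin(theta)))
    return np.abs(A.conj().T @ snapshot) ** 2

# simulated noise-free snapshot with a single target at 20 degrees
n = 8
x = np.exp(2j * np.pi * 0.5 * np.arange(n) * np.sin(np.deg2rad(20.0)))
spectrum = doa_beamformer(x, n)
estimate = np.linspace(-90, 90, 361)[np.argmax(spectrum)]
```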
Several reports about information security incidents in recent years show that the human factor is involved in over 80% of attack vectors.
This is why employees have to be aware of their crucial role in protecting information within organizations.
Awareness-raising measures are therefore often used to sensitize employees properly.
However, organizations require support to increase Information Security Awareness (ISA) in a planned and targeted manner.
Reviewing and evaluating the used awareness measures is an important aspect of increasing ISA within organizations.
Suitable measuring and evaluation methods are required to verify the effectiveness and success of the applied sensitization measures.
Therefore, the aim of this thesis is to provide organizations with measurement and assessment methods for ISA.
This thesis developed a Maturity Model (MM) to provide a suitable guideline for improving ISA in organizations.
The MM can also be used to evaluate the current state of an organization.
Since many MMs are criticized for not having a solid basis, this thesis developed the MM in a scientifically sound manner.
The development of the MM is based on item response theory and a polytomous extension of the Rasch model paired with hierarchical cluster analysis.
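For context, the polytomous Rasch (partial credit) model underlying this analysis gives the probability that respondent $n$ answers item $i$ in category $x$ as (standard form; the thesis's exact parameterization may differ):

$$P(X_{ni}=x) \;=\; \frac{\exp\sum_{k=1}^{x}(\theta_n-\delta_{ik})}{\sum_{j=0}^{m_i}\exp\sum_{k=1}^{j}(\theta_n-\delta_{ik})},$$

with ability parameter $\theta_n$ (here: an organization's ISA level) and threshold parameters $\delta_{ik}$, where the empty sum for $j=0$ is taken to be 0.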
A survey was used to collect quantitative data to derive a definition for each level of the MM.
Participants in the survey rated the current situation in their organizations, making the individual difficulty levels of the MM reflect the current skill level of the organizations.
The MM was evaluated using a focus group with ISA experts and a workshop with potential end users.
The resulting MM has five maturity levels in four different dimensions.
Both the experts and the end users described the MM as practical and usable, even in its early stages.
Nevertheless, further evaluations are needed to empirically support these personal impressions.
In order to also assess ISA in organizations based on quantitative measurements, suitable tools are required to record and visualize the measurement results.
Both manual and automated approaches for measuring ISA were identified and developed.
Different aspects of ISA such as knowledge and habits were considered.
After testing different manual approaches, automation possibilities were examined.
The automation is achieved with the help of a software-based measurement method.
This software can be extended in the future for other aspects of ISA such as intention or salience.
Both the MM and the software support organizations in improving their ISA accordingly.
This cumulative dissertation followed the design science research paradigm.
The environment as well as the knowledge base are considered and used to develop artifacts.
The MM and the measurement methods represent these artifacts.
To contribute to the knowledge base, a total of seven papers containing partial results of this dissertation have been published.
At the end of this thesis, all results are discussed in a common context and all findings are prepared for the next design science iteration.
Finally, an operational concept is proposed to show how the previous results can already be used in practice.
Predictive process monitoring (PPM) is the discipline of exploiting event logs of business processes to construct predictive models for anticipating different properties of running business processes. The event logs used contain control flow information of past process executions and, often, additional information about the context in which a process ran. As the process context can add valuable information to a predictive model, recent PPM techniques often incorporate it to improve process predictions. While most techniques incorporate context information as well-structured numerical and categorical context features, only a few utilize unstructured text from process-related comment fields, emails, or documents. The few existing text-aware PPM approaches are limited in capturing semantic information, as different meanings of the same word occurring in different contexts, i.e., sentences, are ignored.
This paper addresses this limitation by proposing a text-aware PPM technique using contextualized word embeddings to predict the next activity and the next timestamp of running process instances.
An experimental evaluation with a text-enriched real-life event log shows that our technique can outperform text-aware PPM approaches relying on non-contextualized word embeddings in terms of predictive performance.
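A minimal sketch of the embedding step such a technique relies on: encoding process-related text with a pretrained transformer so that the same word receives different vectors in different sentences. The model name and mean pooling are illustrative choices, not the paper's configuration:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed(text: str) -> torch.Tensor:
    """Contextualized text embedding by mean-pooling token vectors."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state   # (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0)

# the same word ("charge") receives context-dependent vectors
e1 = embed("Customer disputed the charge on the invoice.")
e2 = embed("The battery needs a full charge before testing.")
# such embeddings can be concatenated with control-flow features of a
# running case and fed to a next-activity / next-timestamp predictor
```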