Digitalisierung
Various fields of science face a reproducibility crisis. For quantum software engineering as an emerging field, it is therefore imperative to focus on proper reproducibility engineering from the start. Yet the provision of reproduction packages is almost universally lacking. Actionable advice on how to build such packages is rare, which is particularly unfortunate in a field with many contributions from researchers with backgrounds outside computer science. In this article, we argue how to rectify this deficiency by proposing a 1-2-3 approach to reproducibility engineering for quantum software experiments: Using a meta-generation mechanism, we generate DOI-safe, long-term functioning and dependency-free reproduction packages. They are designed to satisfy the requirements of professional and learned societies solely on the basis of project-specific research artefacts (source code, measurement and configuration data), and require little time investment by researchers. Our scheme ascertains long-term traceability even when the quantum processor itself is no longer accessible. By drastically lowering the technical bar, we foster the proliferation of reproduction packages in quantum software experiments and ease the inclusion of non-CS researchers entering the field.
In this paper we show how a feature-oriented development methodology can be exploited to investigate a large set of possible implementations for a real-time rendering algorithm. We rely on previously published work to explore potential dimensions of the implementation space of an algorithm to be run on a graphics processing unit (GPU) using CUDA. The main contribution of our paper is to provide a clear example of the benefit to be gained from existing methods in a domain that only slowly moves toward higher-level abstractions. Our method employs a generative approach and makes heavy use of Common Lisp macros before the code is ultimately transformed to CUDA.
In a variety of tomographic applications, data cannot be fully acquired, leading to severely underdetermined image reconstruction. Conventional methods result in reconstructions with significant artifacts. In order to remove these artifacts, regularization methods have to be applied that incorporate additional information. An important example is TV reconstruction, which is well known to efficiently compensate for missing data and to reduce reconstruction artifacts effectively. At the same time, however, tomographic data is also contaminated by noise, which poses an additional challenge. The use of a single regularizer within a variational regularization framework must therefore account for both the missing data and the noise. However, a single regularizer may not be ideal for both tasks. For example, the TV regularizer is a poor choice for noise reduction over different scales, in which case ℓ1 curvelet regularization methods work well. To address this issue, in this paper we introduce a novel variational regularization framework that combines the advantages of two different regularizers. The basic idea of our framework is to perform reconstruction in two stages, where the first stage mainly aims at accurate reconstruction in the presence of noise, and the second stage aims at artifact reduction. Both reconstruction stages are connected by a data proximity condition. The proposed method is implemented and tested for limited-view CT using a combined curvelet-TV approach. We define and implement a curvelet transform adapted to the limited view problem and demonstrate the advantages of our approach in a series of numerical experiments in this context.
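For orientation, the following is a minimal sketch of such a two-stage scheme in generic notation, assuming a forward operator A (the limited-view Radon transform), data y, a curvelet transform Ψ, a regularization weight α, and a proximity tolerance δ; the exact functionals and coupling used in the paper may differ.

```latex
\begin{aligned}
\text{Stage 1 (noise-oriented):}\quad
  & x_1 \in \operatorname*{arg\,min}_{x}\; \tfrac{1}{2}\,\|A x - y\|_2^2 + \alpha\,\|\Psi x\|_1,\\
\text{Stage 2 (artifact reduction):}\quad
  & x_2 \in \operatorname*{arg\,min}_{x}\; \operatorname{TV}(x)
    \quad \text{s.t.}\quad \|A x - A x_1\|_2 \le \delta .
\end{aligned}
```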
Parametric surfaces are an essential modeling tool in computer aided design and movie production. Even though their use is well established in industry, generating ray-traced images adds significant cost in time and memory consumption. Ray tracing such surfaces is usually accomplished by subdividing the surfaces on-the-fly, or by conversion to a polygonal representation. However, on-the-fly subdivision is computationally very expensive, whereas polygonal meshes require large amounts of memory. This is a particular problem for parametric surfaces with displacement, where very fine tessellation is required to faithfully represent the shape. Hence, memory restrictions are the major challenge in production rendering. In this paper, we present a novel solution to this problem. We propose a compression scheme for a-priori Bounding Volume Hierarchies (BVHs) on parametric patches that reduces the data required for the hierarchy by a factor of up to 48. We further propose an approximate evaluation method that does not require leaf geometry, yielding an overall reduction of memory consumption by a factor of 60 over regular BVHs on indexed face sets and by a factor of 16 over established state-of-the-art compression schemes. Alternatively, our compression can simply be applied to a standard BVH while keeping the leaf geometry, resulting in a compression rate of up to 2:1 over current methods. Although decompression generates additional costs during traversal, we can manage very complex scenes even on the memory-restrictive GPU at competitive render times.
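As an illustration of the general principle behind quantized hierarchies (not the specific encoding proposed in the paper), child bounding boxes can be stored as small integer offsets relative to their parent box, so that full floating-point extents are only needed near the root; a minimal sketch:

```python
import numpy as np

def quantize_child(parent_min, parent_max, child_min, child_max, bits=8):
    """Encode a child AABB as integer offsets relative to its parent AABB.

    All box arguments are arrays of shape (3,); precision is 2**bits steps per axis.
    """
    scale = (2**bits - 1) / np.maximum(parent_max - parent_min, 1e-30)
    # Round conservatively so the decoded box never shrinks below the true child box.
    qmin = np.clip(np.floor((child_min - parent_min) * scale), 0, 2**bits - 1).astype(np.uint16)
    qmax = np.clip(np.ceil((child_max - parent_min) * scale), 0, 2**bits - 1).astype(np.uint16)
    return qmin, qmax

def dequantize_child(parent_min, parent_max, qmin, qmax, bits=8):
    """Recover a (slightly enlarged) child AABB from its quantized encoding."""
    step = (parent_max - parent_min) / (2**bits - 1)
    return parent_min + qmin * step, parent_min + qmax * step
```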
We present a novel derivative-based parameter identification method to improve the precision at the tool center point of an industrial manipulator. The tool center point is directly considered in the optimization as part of the problem formulation as a key performance indicator. Additionally, our proposed method takes collision avoidance as special nonlinear constraints into account and is therefore suitable for industrial use. The performed numerical experiments show that the optimum experimental designs considering key performance indicators during optimization achieve a significant improvement in comparison to other methods. An improvement in terms of precision at the tool center point of 40% to 44% was achieved in experiments with three KUKA robots and 90 notional manipulator models compared to the heuristic experimental designs chosen by an experimenter as well as 10% to 19% compared to an existing state-of-the-art method.
Moving Object Databases are designed to store and process database objects with attributes that can change over time. Simple examples are moving points, which change position over time; a bit more complex are moving regions, which can also change shape. The spatial and spatiotemporal object types in current moving objects databases are limited to two dimensions. This work strives to extend the set of spatial moving object types into the third and even higher dimensions while preserving a consistent family of operations for it. A robust algorithm for the interpolation of two regions to a moving region of any dimensionality is developed, as well as the fundamental ideas for several other operations.
We address a practical challenge in agile web development against NoSQL data stores: Upon a new release of the web application, entities already persisted in production no longer match the application code. Rather than migrating all legacy entities eagerly (prior to the release) and at the cost of application downtime, lazy data migration is a popular alternative: When a legacy entity is loaded by the application, all pending structural changes are applied. Yet correctly migrating legacy data from several releases back, involving more than one entity at a time, is not trivial. In this paper, we propose a holistic model for reading, writing, and migrating data, expressed in non-recursive Datalog with negation. In implementing our model, we may blend established Datalog evaluation algorithms, such as an incremental evaluation with certain rules evaluated bottom-up, and certain rules evaluated top-down with sideways information passing. Our systematic approach guarantees that from the viewpoint of the application, it remains transparent whether data is migrated eagerly or lazily.
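The abstract does not spell out the Datalog rules; purely as an illustration of the lazy-migration idea itself, here is a procedural sketch with hypothetical helper names (the paper's actual model is declarative, in non-recursive Datalog with negation):

```python
def migrate_v1_to_v2(e):
    # Hypothetical structural change: merge two name fields into one.
    e = dict(e)
    e["full_name"] = f'{e.pop("first")} {e.pop("last")}'
    e["_v"] = 2
    return e

def migrate_v2_to_v3(e):
    # Hypothetical structural change: add a new attribute with a default.
    e = dict(e)
    e.setdefault("tags", [])
    e["_v"] = 3
    return e

MIGRATIONS = {1: migrate_v1_to_v2, 2: migrate_v2_to_v3}
CURRENT_VERSION = 3

def load_entity(store, key):
    """Lazily apply all pending migrations when a legacy entity is read."""
    entity = store[key]
    while entity.get("_v", 1) < CURRENT_VERSION:
        entity = MIGRATIONS[entity.get("_v", 1)](entity)
    store[key] = entity  # persist the migrated entity so the work happens only once
    return entity

store = {"u1": {"first": "Ada", "last": "Lovelace", "_v": 1}}
print(load_entity(store, "u1"))
# -> {'full_name': 'Ada Lovelace', '_v': 3, 'tags': []}
```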
Purpose
Age-related macular degeneration (AMD) is a common threat to vision. While classification of disease stages is critical to understanding disease risk and progression, several systems based on color fundus photographs are known. Most of these require in-depth and time-consuming analysis of fundus images. Herein, we present an automated computer-based classification algorithm.
Design
Algorithm development for AMD classification based on a large collection of color fundus images. Validation is performed on a cross-sectional, population-based study.
Participants
We included 120 656 manually graded color fundus images from 3654 Age-Related Eye Disease Study (AREDS) participants. AREDS participants were >55 years of age, and non-AMD sight-threatening diseases were excluded at recruitment. In addition, performance of our algorithm was evaluated in 5555 fundus images from the population-based Kooperative Gesundheitsforschung in der Region Augsburg (KORA; Cooperative Health Research in the Region of Augsburg) study.
Methods
We defined 13 classes (9 AREDS steps, 3 late AMD stages, and 1 for ungradable images) and trained several convolution deep learning architectures. An ensemble of network architectures improved prediction accuracy. An independent dataset was used to evaluate the performance of our algorithm in a population-based study.
Main Outcome Measures
κ Statistics and accuracy to evaluate the concordance between predicted and expert human grader classification.
Results
A network ensemble of 6 different neural net architectures predicted the 13 classes in the AREDS test set with a quadratic weighted κ of 92% (95% confidence interval, 89%–92%) and an overall accuracy of 63.3%. In the independent KORA dataset, images wrongly classified as AMD were mainly the result of a macular reflex observed in young individuals. By restricting the KORA analysis to individuals >55 years of age and prior exclusion of other retinopathies, the weighted and unweighted κ increased to 50% and 63%, respectively. Importantly, the algorithm detected 84.2% of all fundus images with definite signs of early or late AMD. Overall, 94.3% of healthy fundus images were classified correctly.
Conclusions
Our deep learning algorithm achieved a weighted κ outperforming human graders in the AREDS study and is suitable for classifying AMD fundus images in other datasets of individuals >55 years of age.
Every open source project needs to decide on an open source license. This decision is of high economic relevance: Just which license is the best one to help the project grow and attract a community? The most common question is: Should the project choose a restrictive (reciprocal) license or a more permissive one? As an important step towards answering this question, this paper analyses actual license choice and correlated project growth from ten years of open source projects. It provides closed analytical models and finds that around 2001 a reversal in license choice occurred from restrictive towards permissive licenses.
Random numbers are a valuable component in diverse applications that range from simulations and gambling to cryptography. The quest for true randomness in these applications has engendered a large variety of different proposals for producing random numbers based on the foundational unpredictability of quantum mechanics [4–11]. However, most approaches do not consider that a potential adversary could have knowledge about the generated numbers, so the numbers are not verifiably random and unique [12–15]. Here we present a simple experimental setup based on homodyne measurements that uses the purity of a continuous-variable quantum vacuum state to generate unique random numbers. We use the intrinsic randomness in measuring the quadratures of a mode in the lowest energy vacuum state, which cannot be correlated to any other state. The simplicity of our source, combined with its verifiably unique randomness, are important attributes for achieving high-reliability, high-speed and low-cost quantum random number generators.
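For illustration only: in an idealized model, the homodyne quadrature of the vacuum state is a zero-mean Gaussian variable, and raw bits can be derived, for instance, from the sign of each sample (a real device additionally needs calibration and randomness extraction). A small simulation sketch under that assumption:

```python
import numpy as np

# Stand-in for the physical measurement; the fixed seed only makes this
# *simulation* reproducible -- a real generator must not be seeded this way.
rng = np.random.default_rng(seed=42)

# Idealized homodyne measurement: vacuum-state quadrature samples are
# zero-mean Gaussian, with the variance set by the shot-noise level.
samples = rng.normal(loc=0.0, scale=1.0, size=10_000)

# Simplest possible digitization: one raw bit per sample from its sign.
raw_bits = (samples > 0).astype(np.uint8)

print(raw_bits[:16], "mean =", raw_bits.mean())  # mean should be close to 0.5
```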
This work integrates two distinct research areas of parallel and distributed computing, (1) automatic loop parallelization, and (2) component-based Grid programming. The latter includes technologies developed within CoreGRID for simplifying Grid programming: the Grid Component Model (GCM) and Higher-Order Components (HOCs). Components support developing applications on the Grid without taking all the technical details of the particular platform type into account (network communication, heterogeneity, etc.). The GCM enables a hierarchical composition of program pieces and HOCs enable the reuse of component code in the development of new applications by specifying application-specific operations in a program via code parameters. When a programmer is provided, e.g., with a compute farm HOC, only the independent worker tasks must be described. But, once an application exhibits data or control dependences, the trivial farm is no longer sufficient. Here, the power of loop parallelization tools, like LooPo, comes into play: by embedding LooPo into a HOC, we show that these two technologies in combination facilitate the automatic transformation of a sequential loop nest with complex dependences (supplied by the user as a HOC parameter) into an ordered task graph, which can be processed on the Grid in parallel. This technique can significantly simplify GCM-based systems which combine multiple HOCs and other components. We use an equation system solver based on the successive overrelaxation method (SOR) as our motivating application example and for performance experiments.
Over the last decade a number of high-performance, domain-specific languages (DSLs) have started to grow and help tackle the problem of ever-diversifying hardware and software employed in fields such as HPC (high performance computing), medical imaging, computer vision, etc. Most of those approaches rely on frameworks such as LLVM for efficient code generation and, to reach a broader audience, take input in C-like form. In this paper we present a DSL for image processing that is on par with competing methods, yet its design principles are in strong contrast to previous approaches. Our tool chain is much simpler, easing the burden on implementors and maintainers, while our output, C-family code, is both adaptable and shows high performance. We believe that our methodology provides a faster evaluation of language features and abstractions in the domains above.
Subdivision surfaces, especially with displacement, are one of the key modeling primitives used in high-quality rendering environments such as movie production. While their use easily maps to rasterization-based frameworks, they pose a significant challenge for ray tracing environments. This is due to the fact that incoherent access patterns require storing or caching fully tessellated and displaced meshes for efficient intersection computations. In this paper we use a two-tier hierarchy built on a scene's patches. It relies on compressed and quantized bounding volumes on the second tier to reduce the size of the BVH itself. Based on this acceleration structure, we propose a quantized, compact approximation for leaf nodes while being faithful to the underlying patch geometry. We build on recent advances and present a system that shows competitive performance regarding run-time speed, which is close to full-resolution pre-tessellation methods as well as to previous compression approaches. Ultimately, we provide strong compression of up to a factor of 5:1 compared to state-of-the-art methods while maintaining high geometrical fidelity, surpassing similarly compact approximations and getting close to uncompressed geometry.
Computational grids combine computers in the Internet for distributed data processing and are an attractive platform for the data-intensive applications of bioinformatics. We present an extensible genome processing software for the grid and evaluate its performance. Our software was able to discover previously unknown circular permutations (CP) in the ProDom database containing more than 70 MB of protein data. A specific feature of our software is its design as a component: the Alignment HOC, a Higher-Order Component that makes use of the latest Globus toolkit as grid middleware. Besides genome data, the Alignment HOC accepts plugin code for processing this data as its input, and contains all the required configuration to run the component on top of Globus, thus freeing the non-grid-expert user from dealing with grid middleware. Instead of writing data distribution procedures and configuring the middleware appropriately for every new algorithm, Alignment HOC users reuse the existing component and only write application-specific plugins. To maintain plugins persistently in a reusable manner, we built a web-accessible plugin database with a comfortable administration GUI. The flexible component-based implementation makes it easy to study CPs in other databases (e.g. UniProt/Swiss-Prot) or to use an alignment algorithm different from the standard Needleman-Wunsch. For the efficient distribution of workload, we developed a library of group communication operations for HOCs.
In this paper, we present a new Hybrid Genetic Search (HGS) algorithm for solving the Capacitated Vehicle Routing Problem for Pickup and Delivery (CVRPPD) as it is required for public transport in rural areas. One of the biggest peculiarities here is that a large area has to be covered with as few vehicles as possible. The basic idea of this algorithm is based on a more general version of HGS, which we adapted to solve the CVRPPD in rural areas. It also implements improvements that lead to the acceleration of the algorithm and, thereby, to faster generation of the fastest route. We tested the algorithm on real road data from Roding, a rural district in Bavaria, Germany. Moreover, we designed an API for converting data from the Openrouteservice, so that our algorithm can also be applied to real-world examples.
A Hybrid Solution Method for the Capacitated Vehicle Routing Problem Using a Quantum Annealer
(2019)
The Capacitated Vehicle Routing Problem (CVRP) is an NP-optimization problem (NPO) that has been of great interest for decades to both science and industry. The CVRP is a variant of the vehicle routing problem characterized by capacity-constrained vehicles. The aim is to plan tours for vehicles to supply a given number of customers as efficiently as possible. The problem is the combinatorial explosion of possible solutions, which increases superexponentially with the number of customers. Classical solutions provide good approximations to the globally optimal solution. D-Wave's quantum annealer is a machine designed to solve optimization problems. This machine uses quantum effects to speed up computation time compared to classical computers. The challenge in solving the CVRP on the quantum annealer is the particular formulation of the optimization problem: it has to be mapped onto a quadratic unconstrained binary optimization (QUBO) problem. Complex optimization problems such as the CVRP can be decomposed into smaller subproblems, enabling a sequential solution of the partitioned problem. This work presents a quantum-classic hybrid solution method for the CVRP. It clarifies whether the implementation of such a method pays off in comparison to existing classical solution methods regarding computation time and solution quality. Several approaches to solving the CVRP are elaborated, the arising problems are discussed, and the results are evaluated in terms of solution quality and computation time.
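In general terms (not the specific encoding developed in this work), mapping a problem onto a QUBO means expressing it over binary variables x ∈ {0,1}^n and folding constraints into quadratic penalty terms:

```latex
\min_{x \in \{0,1\}^n} x^{\mathsf{T}} Q\, x
  \;=\; \min_{x \in \{0,1\}^n} \sum_{i} Q_{ii}\, x_i + \sum_{i<j} Q_{ij}\, x_i x_j ,
\qquad
\text{with constraints as penalties, e.g. }
\lambda \Bigl(\sum_{i \in S} x_i - 1\Bigr)^{2}
\text{ to enforce that exactly one variable in } S \text{ is set.}
```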
Introduction: Improving energy efficiency and reducing energy wastage is an important topic of our time. But it is quite difficult to figure out how much of our total electricity bill can be attributed to which device, or at what time a device consumed the energy. We believe the energy efficiency of normal households can be improved if this kind of transparency becomes available. In this article, we present a system for energy measurement at mains sockets to gain a transparent view of the energy consumption of each device in a household. It consists of several smart energy measuring devices (SEMDs) that use a low-power radio protocol to dynamically build and connect to a radio network to transfer power usage data to a server. At the server, the data is stored and can be accessed via a web interface.
Results: Our primary goal was to build a back-end system for an energy metering platform with very low energy consumption. This platform can provide data for a variety of services that enable users (the consumers) to understand and improve their energy consumption behavior and increase the overall energy efficiency of their households.
In this paper a new method for placing bus stops is presented. The method is suitable both for permanently installed new bus stops and for temporarily chosen collection points for call buses. Moreover, our implementation of the Voronoi algorithm chooses new locations for bus stops in such a way that more bus stops are placed in densely populated areas and fewer in sparsely populated areas. To achieve this goal, a corresponding weighting is applied to each possible placement point, based on the number of inhabitants around this point and the points of interest, such as medical centers and department stores, around this point. Using the area of Roding, a small town in Bavaria, for a case study, we show that our method is especially suitable for rural areas, where there are few multi-family houses or apartment blocks and the area is not densely populated.
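One simple way to realize such population-weighted placement (a sketch of the general idea, not necessarily the exact Voronoi-based procedure of the paper) is a weighted k-means, whose centers induce a Voronoi partition that is denser where the weights are higher:

```python
import numpy as np

def weighted_stop_placement(points, weights, k, iters=50, seed=0):
    """Place k stop locations so that heavily weighted areas receive more stops.

    points:  (n, 2) candidate coordinates (e.g. building locations)
    weights: (n,) importance per point (inhabitants plus points of interest)
    """
    rng = np.random.default_rng(seed)
    # Initialize centers by weight-proportional sampling.
    idx = rng.choice(len(points), size=k, replace=False, p=weights / weights.sum())
    centers = points[idx].astype(float)
    for _ in range(iters):
        # Assign every candidate point to its nearest center (its Voronoi cell).
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        cell = d.argmin(axis=1)
        # Move each center to the weighted centroid of its cell.
        for j in range(k):
            m = cell == j
            if m.any():
                w = weights[m][:, None]
                centers[j] = (points[m] * w).sum(axis=0) / w.sum()
    return centers
```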
Background and Objective: Even today, identifying an examination that can diagnose a patient with Parkinson's disease (PD) accurately enough is not an easy task. Although a number of techniques have been used in search of a more precise method, detecting such an illness and measuring its level of severity early enough to postpone its side effects is not straightforward. In this work, after reviewing a considerable number of works, we conclude that only a few techniques address the problem of PD recognition by means of micrography using computer vision techniques. Therefore, we consider the problem of aiding automatic PD diagnosis by means of spirals and meanders filled out in forms, which are then compared with the template for feature extraction.
Methods: In our work, both the template and the drawings are identified and separated automatically using image processing techniques, thus needing no user intervention. Since we have no registered images, the idea is to obtain a suitable representation of both template and drawings using the very same approach for all images in a fast and accurate manner.
Results: The results have shown that we can obtain very reasonable recognition rates (approximately 67%), with the most accurate class being the one represented by the patients, who outnumbered the control individuals in the proposed dataset.
Conclusions: The proposed approach seems suitable for aiding in automatic PD diagnosis by means of computer vision and machine learning techniques. Also, meander images play an important role, leading to higher accuracies than spiral images. We also observed that the main problem in detecting PD lies with patients in the early stages, who can draw near-perfect objects that are very similar to the ones made by control individuals.
We propose a new framework for limited angle tomographic reconstruction. Our approach is based on the observation that for a given acquisition geometry only a few (visible) structures of the object can be reconstructed reliably using a limited angle data set. By formulating this problem in the curvelet domain, we can characterize those curvelet coefficients which correspond to visible structures in the image domain. The integration of this information into the formulation of the reconstruction problem leads to a considerable dimensionality reduction and yields a speedup of the corresponding reconstruction algorithms.
The advent of multi-core CPUs in nearly all embedded markets has prompted an architectural trend towards combining safety-critical and uncritical software on single hardware units. We present a novel architecture for mixed criticality systems based on Linux that allows us to consolidate critical and uncritical parts onto a single hardware unit. CPU virtualisation extensions enable strict and static partitioning of hardware by direct assignment of resources, which allows us to boot additional operating systems or bare-metal applications running alongside Linux. The hypervisor Jailhouse is at the core of the architecture and ensures that the resulting domains may serve workloads of different criticality and cannot interfere in an unintended way. This retains Linux’s feature-richness in uncritical parts, while frugal safety- and real-time-critical applications execute in isolated domains. Architectural simplicity is a central aspect of our approach and a precondition for reliable implementability and successful certification. While standard virtualisation extensions provided by current hardware seem to suffice for a straightforward implementation of our approach, there are a number of further limitations that need to be worked around. This paper discusses the arising issues and evaluates the suitability of our approach for real-world safety- and real-time-critical scenarios.
We present a paradigm for characterization of artifacts in limited data tomography problems. In particular, we use this paradigm to characterize artifacts that are generated in reconstructions from limited angle data with generalized Radon transforms and general filtered backprojection type operators. In order to find when visible singularities are imaged, we calculate the symbol of our reconstruction operator as a pseudodifferential operator.
In the realm of parallel computing, optimization plays a pivotal role in achieving efficient and scalable solutions. In this work, we present the parallelization of a hybrid genetic search for solving the Capacitated Vehicle Routing Problem with Pickup and Delivery (CVRPPD). It leverages the synergy between genetic algorithms and parallel computing to address the complex optimization problem. This hybrid algorithm combines a customized version of local search with a genetic algorithm to compute an effective solution. Our implementation makes use of the Message Passing Interface (MPI) for data distribution and parallel execution. In addition, we run multi-threaded processes on NVIDIA graphical processors using the CUDA technology, which further increases the computation speed and consequently minimizes the runtime. Parallelization also allows the best-improvement strategy to be used instead of the first-improvement strategy while maintaining the same runtime. We store the resulting routes in a bus route database which we created as the basis of an extensive library of optimal routes for our specific use case of optimizing bus routes in a rural area. The experimental results on real road data show that the parallel implementation of the Hybrid Genetic Search (HGS) achieves significant improvements in runtime over the sequential implementation above a certain problem size. We believe that our implementation of the parallel hybrid genetic search method can have a great influence on optimization strategies in parallel computing and can also be applied to other subproblems of the VRP.
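The remark on improvement strategies can be made concrete with a generic local-search sketch (hypothetical cost and neighbours helpers): first-improvement accepts the first better neighbour it encounters, while best-improvement scans the whole neighbourhood; this costs more per step, but the candidate evaluations are independent and therefore parallelize naturally.

```python
def first_improvement(solution, neighbours, cost):
    """Return the first neighbour that improves the current solution, else None."""
    c = cost(solution)
    for cand in neighbours(solution):
        if cost(cand) < c:
            return cand
    return None

def best_improvement(solution, neighbours, cost):
    """Scan the full neighbourhood and return the overall best improving move.

    The candidate evaluations are independent, so this loop is what a parallel
    implementation (MPI ranks or CUDA threads) would distribute.
    """
    best, best_c = None, cost(solution)
    for cand in neighbours(solution):
        c = cost(cand)
        if c < best_c:
            best, best_c = cand, c
    return best
```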
Adaptive Moment Estimation (Adam) is a very popular training algorithm for deep neural networks, implemented in many machine learning frameworks. To the best of the authors' knowledge, no complete convergence analysis exists for Adam. The contribution of this paper is a method for the local convergence analysis in batch mode for a deterministic fixed training set, which gives necessary conditions for the hyperparameters of the Adam algorithm. Due to the local nature of the arguments, the objective function can be non-convex but must be at least twice continuously differentiable.
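For reference, the Adam iteration in question is the standard one: with gradient g_t, step size α, decay rates β1, β2 ∈ [0, 1) and a small ε > 0,

```latex
\begin{aligned}
m_t &= \beta_1 m_{t-1} + (1-\beta_1)\, g_t, &
v_t &= \beta_2 v_{t-1} + (1-\beta_2)\, (g_t \odot g_t),\\
\hat m_t &= \frac{m_t}{1-\beta_1^{\,t}}, &
\hat v_t &= \frac{v_t}{1-\beta_2^{\,t}},\\
\theta_t &= \theta_{t-1} - \alpha\, \frac{\hat m_t}{\sqrt{\hat v_t} + \varepsilon}. &&
\end{aligned}
```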
Measurement is the only part of a general quantum system that has yet to be characterised experimentally in a complete manner. Detector tomography provides a procedure for doing just this; an arbitrary measurement device can be fully characterised, and thus calibrated, in a systematic way without access to its components or its design. The result is a reconstructed POVM containing the measurement operators associated with each measurement outcome. We consider two detectors, a single-photon detector and a photon-number counter, and propose an easily realised experimental apparatus to perform detector tomography on them. We also present a method of visualising the resulting measurement operators.
A quantitative analysis of culture-induced differences in pivotal IT outsourcing contract features
(2019)
More than 30 years after its first implementation, IT outsourcing (ITO) is unanimously considered a critical component of corporate strategy for private and public institutions alike. While implementations of ITO around the world share some common characteristics like typical reasons for outsourcing, key success factors, or dimensions along which they can be classified, extant research also points to regional differences. However, research on this topic, specifically regarding pivotal contract features like contract value, contract length, or pricing methods, is still in its infancy, and quantitative analyses on the subject are particularly scarce. We address this research gap by analyzing data on 14,917 ITO contracts closed between 2007 and 2017 through the lens of cultural regions and three statistical methods. The contribution of our paper is threefold. First, our descriptive analysis points to globally decreasing contract lengths and contract values, confirming previous studies and practice reports. Second, an ANOVA with independent post-hoc testing provides quantitative support for the degree of dissimilarity among individual regions in pivotal ITO contract features. Finally, our quantitative replication of a previous study identifies culture-induced regional differences between the USA and Japan regarding the effect of influence factors on ITO contract features.
In the engineering domain, representing real-world objects using a body of data, called a digital twin, which is frequently updated by “live” measurements, has shown various advantages over traditional modelling and simulation techniques. Consequently, urban planners have a strong interest in digital twin technology, since it provides them with a laboratory for experimenting with data before making far-reaching decisions. Realizing these decisions involves the work of professionals in the architecture, engineering and construction (AEC) domain who nowadays collaborate via the methodology of building information modeling (BIM). At the same time, the citizen plays an integral role both in the data acquisition phase, while also being a beneficiary of the improved resource management strategies. In this paper, we present a prototype for a “digital energy twin” platform we designed in cooperation with the city of Regensburg. We show how our extensible platform design can satisfy the various requirements of multiple user groups through a series of data processing solutions and visualizations, indicating valuable design and implementation guidelines for future projects. In particular, we focus on two example use cases concerning building electricity monitoring and BIM. By implementing a flexible data processing architecture we can involve citizens in the data acquisition process, meeting the demands of modern users regarding maximum transparency in the handling of their data.
A Step Towards the Automated Diagnosis of Parkinson's Disease: Analyzing Handwriting Movements
(2015)
Parkinson’s disease (PD) has affected millions of people worldwide, its major problem being the loss of movement and, consequently, the ability to work and move about. Although several works attempt to deal with this problem, most of them make use of datasets composed of only a few subjects. In this work, we present some results toward the automated diagnosis of PD by means of computer vision-based techniques on a dataset composed of dozens of patients, which is one of the main contributions of this work. The dataset is part of a joint research project that aims at extracting both visual and signal-based information from healthy individuals and PD patients in order to advance the early diagnosis of PD. The dataset is composed of handwriting clinical exams that are analyzed by means of image processing and machine learning techniques, with encouraging and promising preliminary results. Additionally, a new quantitative feature to measure the amount of tremor in an individual’s handwritten trace, called Mean Relative Tremor, is also presented.
The Internet of Things (IoT) is widely used as a synonym for nearly every connected device. This makes it really difficult to find the right kind of scientific publication for the intended category of IoT. Conferences and other events for IoT are confusing about the target group (consumer, enterprise, industrial, etc.), and standardisation organisations suffer from the same problem. To demonstrate these problems, this paper shows the results of an analysis of IoT publications in different research libraries. The number of results for IoT, consumer, enterprise and industrial search queries was evaluated, and a manual study of 100 publications was conducted. Depending on the research library or search engine, different results for the distribution of consumer, enterprise and industrial IoT are visible. The comparison with the results of the manual evaluation shows that some search queries do not return all desired publications or that considerably more, unwanted results are returned. Most researchers do not use the keywords correctly, and the exact category of IoT can often only be determined from the abstract. This points to major problems with the use of the term IoT and its lack of clear boundaries.
Controller Area Network (CAN) is still the most used network technology in today's connected cars. Now and in the near future, penetration tests in the area of automotive security will still require tools for CAN media access. More and more open source automotive penetration tools and frameworks are presented by researchers at various conferences, all with different properties in terms of usability, features and supported use cases. Choosing a proper tool for security investigations in automotive networks poses a challenge, since lots of different solutions are available. This paper compares currently available CAN media access solutions and gives advice on competitive hardware and software tools for automotive penetration testing.
Incomplete conceptualization of the information technology outsourcing (ITO) literature represents a challenge for navigating extant research and engaging in purposeful academic discourse. We extend the analysis of empirical findings on determinants of ITO decisions, outcomes, and governance. We identify increasing levels of research maturity, analyze the effects of 38 new independent variables, highlight contradictory findings, and observe increasing interest in emerging topics such as innovation through ITO and multisourcing.
Computer-based improvements of waste collection and public transport procedures are often a part of smart city initiatives. When we envision an ideal bus network, it will primarily connect the most crowded bus stops. Similarly, an ideal waste collection vehicle will arrive at every container exactly at the time when it is fully loaded. Beyond doubt, this will reduce traffic and support environmentally friendly intentions like waste separation, as it will make more containers manageable. A difficulty of putting that vision into practice is that vehicles cannot always be where they are needed. Knowing the best time for arriving at a position is not sufficient for finding the optimal route. Therefore, we compare four different approaches to optimized routing, from Regensburg, Christchurch, Malaysia, and Bangalore. Our analysis shows that the best schedules result from frequently adapting field-tested routes based on sensor measurements and route-optimizing computations.
We present a ready to compute trace formula for Hecke operators on vector-valued modular forms of integral weight for SL2(Z) transforming under the Weil representation. As a corollary, we obtain a ready to compute dimension formula for the corresponding space of vector-valued cusp forms, which is more general than the dimension formulae previously published in the vector-valued setting.
We suggest that parallel software components used for grid computing should be adaptable to application-specific requirements, instead of developing new components from scratch for each particular application. As an example, we take a parallel farm component which is “embarrassingly parallel”, i.e., free of dependencies, and adapt it to the wavefront processing pattern with dependencies that impact its behavior. We describe our approach in the context of Higher-Order Components (HOCs), with the Java-based system Lithium as our implementation framework. The adaptation process relies on HOCs’ mobile code parameters that are shipped over the network of the grid. We describe our implementation of the proposed component adaptation method and report first experimental results for a particular grid application — the alignment of DNA sequence pairs, a popular, time-critical problem in computational molecular biology.
Automotive Original Equipment Manufacturers (OEMs) and suppliers have recently started shifting their focus towards the security of their connected, electronically programmable products, since cars used to be mainly mechanical products. However, this has changed due to the rising digitalization of vehicles. Security and functional safety have grown together and need to be addressed as a single issue, referred to as automotive security in the following article. One way to accomplish security is automotive security education. The scientific contribution of this paper is to establish an Automotive Penetration Testing Education Platform (APTEP). It consists of three layers representing different attack points of a vehicle: the outer, inner, and core layers. Each of those contains multiple interfaces, such as Wireless Local Area Network (WLAN) or electric vehicle charging interfaces in the outer layer, message bus systems in the inner layer, and debug or diagnostic interfaces in the core layer. One implementation of APTEP is in a hardware case and as a virtual platform, referred to as the Automotive Network Security Case (ANSKo). The hardware case contains emulated control units and different communication protocols. The virtual platform uses Docker containers to provide a similar experience over the internet. Both offer two kinds of challenges. The first introduces users to a specific interface, while the second combines multiple interfaces into a complex and realistic challenge. This concept is based on modern didactic theory, such as constructivism and problem-based learning. Computer Science students from the Ostbayerische Technische Hochschule (OTH) Regensburg experienced the challenges as part of a special topic course and provided positive feedback.
Computing a sample mean of time series under dynamic time warping is NP-hard. Consequently, there is an ongoing research effort to devise efficient heuristics. The majority of heuristics have been developed for the constrained sample mean problem that assumes a solution of predefined length. In contrast, research on the unconstrained sample mean problem is underdeveloped. In this article, we propose a generic average-compress (AC) algorithm to address the unconstrained problem. The algorithm alternates between averaging (A-step) and compression (C-step). The A-step takes an initial guess as input and returns an approximation of a sample mean. Then the C-step reduces the length of the approximate solution. The compressed approximation serves as initial guess of the A-step in the next iteration. The purpose of the C-step is to direct the algorithm to more promising solutions of shorter length. The proposed algorithm is generic in the sense that any averaging and any compression method can be used. Experimental results show that the AC algorithm substantially outperforms current state-of-the-art algorithms for time series averaging.
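A schematic view of the alternating scheme, with hypothetical average and compress helpers standing in for any concrete A-step (e.g. a DBA-style averaging) and C-step:

```python
def average_compress(series, initial_guess, average, compress, max_iter=20):
    """Alternate an averaging step (A) and a compression step (C).

    average(series, guess) -> (approximate mean of length len(guess), its cost)
    compress(mean)         -> shorter series used as the next initial guess
    Returns the best approximation found across all iterations.
    """
    guess = initial_guess
    best, best_cost = None, float("inf")
    for _ in range(max_iter):
        mean, cost = average(series, guess)   # A-step: refine at the current length
        if cost < best_cost:
            best, best_cost = mean, cost
        shorter = compress(mean)              # C-step: propose a shorter candidate
        if len(shorter) >= len(mean):         # no further reduction possible
            break
        guess = shorter
    return best
```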
Social network analysis is extremely well supported by the R community and is routinely used for studying the relationships between people engaged in collaborative activities. While there has been rapid development of new approaches and metrics in this field, the challenging question of validity (how well insights derived from social networks agree with reality) is often difficult to address. We propose the use of several R packages to generate interactive surveys that are specifically well suited for validating social network analyses. Using our web-based survey application, we were able to validate the results of applying community-detection algorithms to infer the organizational structure of software developers contributing to open-source projects.
This article provides a mathematical analysis of singular (nonsmooth) artifacts added to reconstructions by filtered backprojection (FBP) type algorithms for X-ray computed tomography (CT) with arbitrary incomplete data. We prove that these singular artifacts arise from points at the boundary of the data set. Our results show that, depending on the geometry of this boundary, two types of artifacts can arise: object-dependent and object-independent artifacts. Object-dependent artifacts are generated by singularities of the object being scanned, and these artifacts can extend along lines. They generalize the streak artifacts observed in limited-angle tomography. Object-independent artifacts, on the other hand, are essentially independent of the object and take one of two forms: streaks on lines if the boundary of the data set is not smooth at a point, and curved artifacts if the boundary is smooth locally. We prove that these streak and curve artifacts are the only singular artifacts that can occur for FBP in the continuous case. In addition to the geometric description of artifacts, the article provides characterizations of their strength in Sobolev scale in certain cases. The results of this article apply to the well-known incomplete data problems, including limited-angle and region-of-interest tomography, as well as to unconventional X-ray CT imaging setups that arise in new practical applications. Reconstructions from simulated and real data are analyzed to illustrate our theorems, including the reconstruction that motivated this work: a synchrotron data set in which artifacts appear on lines that have no relation to the object.
The paper presents a research model and a measurement instrument for a research-in-progress study on the antecedents of success in IS offshoring projects. In this empirical-confirmatory study, we intend to analyse the impact of the constructs “offshoring expertise”, “trust in offshore service provider”, “project suitability”, “knowledge transfer”, and “liaison quality” on offshore project success. Constructs and indicators are derived from an extensive literature review. We plan to formulate a structural equation model and to test it using partial least squares (PLS) as an analysis technique. Our research model addresses the paucity of research that quantitatively examines offshoring success.
We evaluate the applicability of quantum computing on two fundamental query optimization problems, join order optimization and multi query optimization (MQO). We analyze the problem dimensions that can be solved on current gate-based quantum systems and quantum annealers, the two currently commercially available architectures.
First, we evaluate the use of gate-based systems on MQO, previously solved with quantum annealing. We show that, contrary to classical computing, a different architecture requires involved adaptations. We moreover propose a multi-step reformulation for join ordering problems to make them solvable on current quantum systems. Finally, we systematically evaluate our contributions for gate-based quantum systems and quantum annealers. Doing so, we identify the scope of current limitations, as well as the future potential of quantum computing technologies for database systems.
Computational models of cognition provide an interface to connect advanced mathematical tools and methods to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical optimization techniques can be applied to efficiently identify optimal parameter values with respect to different optimization goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential optimal performance. We conclude by discussing possible extensions of our approach as well as future steps towards applying more powerful derivative-based optimization methods.
Many problems of industrial interest are NP-complete, and quickly exhaust the resources of computational devices with increasing input sizes. Quantum annealers (QA) are physical devices that aim at this class of problems by exploiting quantum mechanical properties of nature. However, they compete with efficient heuristics and probabilistic or randomised algorithms on classical machines that allow for finding approximate solutions to large NP-complete problems. While first implementations of QA have become commercially available, their practical benefits are far from fully explored. To the best of our knowledge, approximation techniques have not yet received substantial attention. In this paper, we explore how approximate versions of problems, of varying degree, can be systematically constructed for quantum annealer programs, and how this influences result quality or the handling of larger problem instances on a given set of qubits. We illustrate various approximation techniques on both simulations and real QA hardware, on different seminal problems, and interpret the results to contribute towards a better understanding of the real-world power and limitations of current and future quantum computing.
Increasing cyber-attacks on Internet of Things (IoT) environments are a growing problem of digitized households worldwide. The purpose of this study is to investigate how an intelligent Intrusion Detection System (iIDS) can provide more security in IoT networks with a novel architecture that combines multiple classical and machine learning approaches. By combining classical security analysis methods and modern concepts of artificial intelligence, we increase the quality of attack detection and can therefore conduct dedicated attack suppression. The architecture of the iIDS consists of different layers, which in part produce self-sufficient results. The results of the different modules are calculated by means of statement variables and evaluation techniques adapted to the individual module elements and are subsequently combined through limit-value considerations. The architecture combines approaches for the analysis and processing of IoT network traffic and evaluates it to an aggregated score. From this result, it can be determined whether the analyzed data indicates device misuse or attempted break-ins into the network. This study answers the question of whether a connection between classical and modern concepts for monitoring and analyzing IoT network traffic can be implemented meaningfully within a reliable iIDS architecture.
Debian, as a collection of software packages and components, is known to be one of the largest software projects in the history of mankind. Combined with a traceable history over many years, the artefacts created by Debian developers and users make it one of science’s favourite targets for quantitatively or qualitatively understanding how real-world software development works (or does not), how people collaborate, and many other related questions. Unfortunately, while scientists make ample use of the resources and artefacts created by FLOSS and friends, the exchange of insights and ideas does not seem to extend in both directions: developers, users and integrators are often unaware of results obtained in science. This talk will introduce the Debian community to a selection of the most important results obtained by scientific (software engineering) research, with a special focus on large-scale socio-technical analysis of projects like Debian, and the possible implications and improvements these may bring to Debian development itself.
Artifacts in Incomplete Data Tomography with Applications to Photoacoustic Tomography and Sonar
(2015)
We develop a paradigm using microlocal analysis that allows one to characterize the visible and added singularities in a broad range of incomplete data tomography problems. We give precise characterizations for photoacoustic and thermoacoustic tomography and sonar, and provide artifact reduction strategies. In particular, our theorems show that it is better to arrange sonar detectors so that the boundary of the set of detectors does not have corners and is smooth. To illustrate our results, we provide reconstructions from synthetic spherical mean data as well as from experimental photoacoustic data.
Barrett's esophagus has seen a swift rise in the number of cases in recent years. Although traditional diagnosis methods play a vital role in early-stage treatment, they are generally time- and resource-consuming. In this context, computer-aided approaches for automatic diagnosis have emerged in the literature, since early detection is intrinsically related to remission probabilities. However, they still suffer from drawbacks because of the lack of available data for machine learning purposes, thus implying reduced recognition rates. This work introduces Generative Adversarial Networks to generate high-quality endoscopic images, thereby identifying Barrett's esophagus and adenocarcinoma more precisely. Further, Convolutional Neural Networks are used for feature extraction and classification purposes. The proposed approach is validated over two datasets of endoscopic images, with the experiments conducted over the full and patch-split images. The application of Deep Convolutional Generative Adversarial Networks for the data augmentation step and LeNet-5 and AlexNet for the classification step allowed us to validate the proposed methodology over an extensive set of datasets (based on original and augmented sets), reaching 90% accuracy for the patch-based approach and 85% for the image-based approach. Both results are based on augmented datasets and are statistically different from the ones obtained on the original datasets of the same kind. Moreover, the impact of data augmentation was evaluated in the context of image description and classification, and the results obtained using synthetic images outperformed the ones over the original datasets, as well as other recent approaches from the literature. Such results suggest promising insights related to the importance of proper data for accurate classification concerning computer-assisted Barrett's esophagus and adenocarcinoma detection.
A method for automated segmentation of vocal cords in stroboscopic video sequences is presented.
In contrast to earlier approaches, the inner and outer contours of the vocal cords are independently delineated. Automatic segmentation of the low-contrast images is carried out by connecting the shape constraint of a point distribution model to a multi-channel region-based balloon model. This enables us to robustly compute a vibration profile that is used as a new diagnostic tool to visualize several vibration parameters in a single graphic. The vibration profiles are studied in two cases: one physiological vibration and one functional pathology.
While high-level software components simplify the programming of grid applications and Web services increase their interoperability, developing such components and configuring the interconnecting services is a demanding task. In this paper, we consider the combination of Higher-Order Components (HOCs) with the Fractal component model and the ProActive library.
HOCs are parallel programming components, made accessible on the grid via Web services that use a special class loader enabling code mobility: executable code can be uploaded to a HOC, allowing one to customize the HOC. Fractal simplifies the composition of components and the ProActive library offers a generator for automatically creating Web services from components composed with Fractal, as long as all the parameters of these services have primitive types.
Taking all the advantages of HOCs, ProActive and Fractal together, the obvious conclusion is that composing HOCs using Fractal and automatically exposing them as Web services on the grid via ProActive minimizes the required efforts for building complex grid systems. In this context, we solved the problem of exchanging code-carrying parameters in automatically generated Web services by integrating the HOC class loading mechanism into the ProActive library.
It remains difficult to segregate pelagic habitats since structuring processes are dynamic on a wide range of scales and clear boundaries in the open ocean are non-existent. However, to improve our knowledge about existing ecological niches and the processes shaping the enormous diversity of marine plankton, we need a better understanding of the driving forces behind plankton patchiness. Here we describe a new machine-learning method to detect and quantify pelagic habitats based on hydrographic measurements. An Autoencoder learns two-dimensional, meaningful representations of higher-dimensional micro-habitats, which are characterized by a variety of biotic and abiotic measurements from a high-speed ROTV. Subsequently, we apply a density-based clustering algorithm to group similar micro-habitats into associated pelagic macro-habitats in the German Bight of the North Sea. Three distinct macro-habitats, a “surface mixed layer,” a “bottom layer,” and an exceptionally “productive layer” are consistently identified, each with its distinct plankton community. We provide evidence that the model detects relevant features like the doming of the thermocline within an Offshore Wind Farm or the presence of a tidal mixing front.
This talk will provide a general overview on how Scapy can be used for automotive penetration testing. All present features of Scapy for automotive penetration will be introduced and explained. Also an overview of higher level automotive protocols will be given.
As automotive penetration testing becomes more important, the lack of free tools for automotive network penetration testing led us to integrate new features in Scapy. Scapy is a well-established framework for packet manipulation. The flexibility of Scapy allowed us to implement automotive interfaces (CAN) and automotive protocols (ISOTP, GMLAN, UDS, DoIP, OBD-II).
This talk explains the basics of these automotive protocols, the workflow with Scapy for automotive network penetration testing. A live demonstration with some embedded hardware will be given.
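As a minimal usage sketch, assuming a Linux SocketCAN interface named can0 and a recent Scapy release with the automotive contrib modules (parameter names can differ slightly between Scapy versions):

```python
from scapy.contrib.cansocket import CANSocket   # selects a native or python-can backend
from scapy.layers.can import CAN

sock = CANSocket(channel="can0")                 # open the SocketCAN interface

# OBD-II functional request on CAN ID 0x7DF: service 0x01, PID 0x00
req = CAN(identifier=0x7DF, length=8,
          data=b"\x02\x01\x00\x00\x00\x00\x00\x00")
sock.send(req)

# Collect whatever the ECUs answer within one second and print it.
for resp in sock.sniff(timeout=1):
    resp.show()
```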
This paper introduces a custom framework for benchmarking software implementations from the National Institute of Standards and Technology (NIST) Lightweight Cryptography (LWC) project on embedded devices. We present the design and core functions of the framework and apply it to various NIST LWC authenticated encryption with associated data (AEAD) ciphers. Altogether, we evaluate the speed of 213 submitted algorithm variants on four different microcontroller units (MCUs), including 32-bit ARM and 8-bit AVR architectures. To allow a more meaningful comparison, we also conduct code size tests on all four boards and RAM utilization tests on one test platform.
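The framework itself targets bare-metal MCU builds, but the core idea of the speed benchmark can be sketched on a host machine; below, AES-GCM from PyCryptodome stands in for an LWC candidate, and the message lengths and run counts are illustrative only.

```python
# Host-side sketch of an AEAD speed benchmark (AES-GCM as a stand-in for a
# NIST LWC candidate). On the MCU targets, cycle counters replace wall-clock time.
import time
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes

key = get_random_bytes(16)
for msg_len in (16, 64, 256, 1024):
    msg, ad = get_random_bytes(msg_len), get_random_bytes(16)
    runs = 1000
    start = time.perf_counter()
    for _ in range(runs):
        cipher = AES.new(key, AES.MODE_GCM, nonce=get_random_bytes(12))
        cipher.update(ad)                          # associated data
        ct, tag = cipher.encrypt_and_digest(msg)   # AEAD encryption
    elapsed = time.perf_counter() - start
    print(f"{msg_len:5d}-byte messages: {elapsed / runs * 1e6:8.1f} us per encryption")
```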
Bestimmung der Lichtquellenfarbe bei der Endoskopie mikrotexturierter Oberflächen des Kehlkopfes
(1999)
To support the diagnosis of vocal fold disorders, objective parameters describing the movement, color, and shape of the vocal folds are developed and clinically evaluated within the research project Quantitative Digitale Laryngoskopie. While the motion analysis provides information about functional voice disorders, the parameters of the color and shape analysis describe morphological changes of the vocal fold tissue. This contribution presents the methods and the results obtained so far for the motion and color analysis.
The motion analysis was carried out with an extended active contour model (snakes). Owing to the modified contour model, the contours of the vocal folds could be detected automatically and reliably over the entire image sequence. Measuring the contours yields new quantitative parameters for assessing laryngoscopic recordings of the vocal folds.
To determine the color properties of the vocal folds, the object color was computed from the RGB image independently of the color of the light source, using clustering methods and quarter-circle analysis. With this color analysis, the color of the light source could be determined and an illumination-independent color image computed. Quantifying the redness of the vocal folds is, for example, a decisive criterion for the diagnosis of acute laryngitis.
Ascertaining reproducibility of scientific experiments is receiving increased attention across disciplines. We argue that the necessary skills are important beyond pure scientific utility, and that they should be taught as part of software engineering (SWE) education. They serve a dual purpose: Apart from acquiring the coveted badges assigned to reproducible research, reproducibility engineering is a lifetime skill for a professional industrial career in computer science.
SWE curricula seem an ideal fit for conveying such capabilities, yet they require some extensions, especially given that even at flagship conferences like ICSE, only slightly more than one-third of the technical papers (at the 2021 edition) receive recognition for artefact reusability. Knowledge and capabilities in setting up engineering environments that allow for reproducing artefacts and results over decades (a standard requirement in many traditional engineering disciplines), writing semi-literate commit messages that document crucial steps of a decision-making process and that are tightly coupled with code, or sustainably taming dynamic, quickly changing software dependencies, to name a few: They all contribute to solving the scientific reproducibility crisis, and enable software engineers to build sustainable, long-term maintainable, software-intensive, industrial systems. We propose to teach these skills at the undergraduate level, on par with traditional SWE topics.
Bierdeckelsalto
(2015)
A popular game consists of flicking a beer mat lying on the edge of a table upward from below with outstretched fingers and then, after one or more somersaults, catching it again between finger and thumb. In physical terms, an impulsive force is applied to the beer mat. Applying the principles of linear and angular momentum leads to simple estimates of the mechanics of the beer-mat somersault. The experiment can be reproduced with physics simulation software, and high-speed videos complement theory and simulation.
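As a back-of-the-envelope illustration (our own estimate, not taken from the article): if the flick imparts a linear impulse p at a distance d from the centre of mass of a coaster of mass m and moment of inertia I, then

\[ v = \frac{p}{m}, \qquad \omega = \frac{p\,d}{I}, \qquad t_{\text{flight}} \approx \frac{2v}{g}, \qquad n \approx \frac{\omega\, t_{\text{flight}}}{2\pi}, \]

with I = mR²/4 for a thin circular coaster of radius R flipping about a diameter; n is then the rough number of somersaults before the coaster returns to launch height.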
In recent years, the workshop "Bildverarbeitung für die Medizin" has established itself through a series of successful events. The goal for 2020 is again to present current research results and to deepen the exchange between scientists, industry, and users. The contributions in this volume, some of them in English, cover all areas of medical image processing, in particular imaging and acquisition, machine learning, image segmentation and image analysis, visualization and animation, time-series analysis, computer-aided diagnosis, biomechanical modeling, validation and quality assurance, image processing in telemedicine, and much more.
The partitioning hypervisor Jailhouse allows us to run safety-critical and uncritical applications in parallel on a single SoC. We present our experiences in porting an existing safety- and real-time-critical application as a Jailhouse guest. This demonstrates a novel and promising approach for implementing mixed-criticality applications with real-time requirements without losing the benefits of Linux: hardware resources are statically partitioned, so guests do not interfere with each other. We present a multicopter platform running the real-time-critical flight stack in an isolated Jailhouse guest. Porting an existing application to a Jailhouse cell proves the practicability of Jailhouse as well as its suitability for real-time safety-critical systems. We stress-test its concept, point out current hardware limitations such as undesired behaviour, and present possible workarounds and solutions.
In recent years, mobility solutions have experienced a significant upswing. Consequently, forecasting the number of passengers and determining the associated demand for vehicles have become increasingly important. In contrast to other work that predicts a single bus route, we analyze all bus routes in a rural area. Differences between bus routes in rural areas and those in cities are highlighted and substantiated by a case study using data from Roding, a town in the rural district of Cham in northern Bavaria. Based on the collected data, we selected a random forest model that lets us determine passenger demand, bus line effectiveness, and general user behavior. The prediction accuracy of the selected model is currently 87%. The collected data helps to build new mobility-as-a-service solutions, such as on-call buses or dynamic route optimizations, as we show with our simulation.
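A minimal sketch of such a demand model with scikit-learn; the input file and feature columns (line, weekday, hour, holiday flag, temperature) are hypothetical and only illustrate the kind of tabular data described in the study.

```python
# Sketch: predict passenger counts per departure from tabular trip features
# with a random forest. Column names are illustrative placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

df = pd.read_csv("bus_trips.csv")                       # hypothetical ticketing export
features = ["line_id", "weekday", "hour", "is_holiday", "temperature"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["passengers"], test_size=0.2, random_state=42)

model = RandomForestRegressor(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print("R^2 on held-out trips:", r2_score(y_test, model.predict(X_test)))
```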
In this paper, we present a new approach to determine the estimated time of arrival (ETA) for bus routes using (Deep) Graph Convolutional Networks (DGCNs). In addition, we use the same DGCN to detect detours within a route. In our application, a classification of routes and their underlying graph structure is performed using graph learning. Our model leads to a fast prediction and avoids solving the vehicle routing problem (VRP) through expensive computations. Moreover, we describe how to predict travel time for all routes using the same DGCN model. This makes it possible to obtain an early estimate of the quality of a route with our network instead of resorting to a more computationally intensive approximation algorithm when determining long travel times with many intermediate stops. Long travel times in our case result from the use of a call-bus system, which must distribute many passengers among several vehicles and can take them to places without a regular stop. For a case study, the rural town of Roding in Bavaria is used. Our training data for this area results from an approximation algorithm that we implemented to optimize routes and, at the same time, to generate an archive of routes of varying quality.
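As an illustration of the basic building block, the sketch below applies the common graph-convolution propagation rule to a small route graph and pools the node features into a single travel-time estimate; it is a toy stand-in for the trained DGCN described above, with random weights and made-up features.

```python
# Toy graph-convolution forward pass for regressing a travel time from a route
# graph (stops = nodes, consecutive stops = edges). Weights are random here;
# in practice they would be trained on the archive of generated routes.
import numpy as np

def gcn_layer(A, H, W):
    """One layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])                      # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = (A_hat * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)

rng = np.random.default_rng(0)
n_stops, n_feat = 6, 4                                  # e.g. position, demand, dwell time, ...
A = np.diag(np.ones(n_stops - 1), k=1)                  # chain of consecutive stops
A = A + A.T
H = rng.normal(size=(n_stops, n_feat))                  # node features of one route

H = gcn_layer(A, H, rng.normal(size=(n_feat, 8)))
H = gcn_layer(A, H, rng.normal(size=(8, 8)))
eta = float(H.mean(axis=0) @ rng.normal(size=(8,)))     # mean-pool nodes, linear readout
print("predicted travel time (arbitrary units):", eta)
```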
Shadow IT and Business-managed IT describe the autonomous deployment/procurement or management of Information Technology (IT) instances, i.e., software, hardware, or IT services, by business entities. For Shadow IT, this happens covertly, i.e., without alignment with the IT organization; for Business-managed IT this happens overtly, i.e., in alignment with the IT organization or in a split responsibility model. We conduct a systematic literature review and structure the identified research themes in a framework of causing factors, outcomes, and governance. As causing factors, we identify enablers, motivators, and missing barriers. Outcomes can be benefits as well as risks/shortcomings of Shadow IT and Business-managed IT. Concerning governance, we distinguish two subcategories: general governance for Shadow IT and Business-managed IT and instance governance for overt Business-managed IT. Thus, a specific set of governance approaches exists for Business-managed IT that cannot be applied to Shadow IT due to its covert nature. Hence, we extend the existing conceptual understanding and allocate research themes to Shadow IT, Business-managed IT, or both concepts and particularly distinguish the governance of the two concepts. Besides, we find that governance themes have been the primary research focus since 2016, whereas older publications (until 2015) focused on causing factors.
Quantum computing is a relatively new paradigm that has raised considerable interest in physics and computer science in general but has so far received little attention in software engineering and architecture. Hybrid applications that consist of both quantum and classical components require the development of appropriate quantum software architectures. However, given that quantum software engineering (QSE) in general is a new research area, quantum software architecture, a sub-research area of QSE, is also understudied. The goal of this chapter is to provide a list of research challenges and opportunities for such architectures. In addition, to make the content understandable to a broader computer science audience, we provide a brief overview of quantum computing and explain the essential technical foundations.
Aim of the study:
The aim of the study is to measure the state of digitalization in rehabilitation facilities and the opportunities and challenges associated with connecting them to the telematics infrastructure.
Methods:
Semi-standardized online survey among operators of rehabilitation facilities in Bavaria (n=33). The questionnaire with 36 questions includes a slightly modified scale based on the "Electronic Medical Record Adoption Model (EMRAM)".
Results:
In 70 percent of the rehabilitation facilities, the degree of digitalization was rated as stage 0 (on a scale of up to 7 stages). Patient-related data (incoming and outgoing) is often transmitted in analog form, whereas processing within the facility is in many cases already predominantly digital. Connecting to the telematics infrastructure is expected to require considerable effort for installation, but also for staff training and for adapting the work organization.
Conclusion:
Changes in the legal and financial situation in Germany open up new opportunities for increased digitalization in rehabilitation facilities. Obstacles are related to IT security requirements, staff training, and the likewise low level of digitalization among hospitals, physicians, and patients, which impedes digital data transmission.
Lean Management is a standard production mode that has been familiar to production organizations for several decades. To date, however, academic literature has presented surprisingly little information about the application of Lean Management in Information Technology (IT) organizations, or what is called Lean IT. Drawing upon an empirical qualitative case study of the IT departments of two multinational companies, in this paper we identify change management lessons learned for Lean IT implementations, as well as seven characteristics of a corresponding change management approach. As an extension of our work, researchers should validate and expand our initial findings, preferably in a quantitative setting.
This paper introduces a novel chaotic flower pollination algorithm (CFPA) to solve a tardiness-constrained flow-shop scheduling problem with simultaneously loaded stations. This industrial manufacturing problem is modeled on a filter basket production line in Germany and has generally been solved using standard deterministic algorithms. This research develops a metaheuristic approach based on the highly efficient flower pollination algorithm coupled with different chaos maps for stochasticity. The objective function targets the tardiness constraint of the due dates. Fifteen different experiments with thirty scenarios are generated to mimic industrial conditions. The results are compared with a genetic algorithm (GA) and with four standard benchmark priority-rule-based deterministic algorithms: First In First Out, Raghu and Rajendran, Shortest Processing Time, and Slack. Based on the obtained results and analysis of the relative difference, percentage relative difference, and t-tests, the CFPA was found to perform significantly better than the deterministic heuristics and the GA.
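For readers unfamiliar with the metaheuristic, the sketch below shows the core loop of a flower pollination algorithm in which the switch probability is driven by a logistic chaos map; the tardiness objective is replaced by a placeholder function and a continuous encoding is used, so this illustrates the search scheme rather than the scheduling model from the paper.

```python
# Sketch of a chaotic flower pollination algorithm (CFPA). The objective is a
# placeholder; for the scheduling problem it would map a job sequence to total
# tardiness.
import math
import numpy as np

rng = np.random.default_rng(1)

def objective(x):
    return float(np.sum(x ** 2))                 # placeholder for the tardiness objective

def levy_step(dim, beta=1.5):
    """Mantegna's algorithm for a Levy-distributed step."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, dim)
    v = rng.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

dim, n_flowers, iters = 10, 25, 200
pop = rng.uniform(-5, 5, (n_flowers, dim))
fitness = np.array([objective(x) for x in pop])
best = pop[fitness.argmin()].copy()
p_chaos = 0.7                                    # initial value of the chaos map

for _ in range(iters):
    p_chaos = 4.0 * p_chaos * (1.0 - p_chaos)    # logistic map drives the switch probability
    for i in range(n_flowers):
        if rng.random() < p_chaos:               # global pollination via Levy flight
            cand = pop[i] + levy_step(dim) * (best - pop[i])
        else:                                    # local pollination between two flowers
            j, k = rng.choice(n_flowers, 2, replace=False)
            cand = pop[i] + rng.random() * (pop[j] - pop[k])
        f = objective(cand)
        if f < fitness[i]:                       # greedy acceptance
            pop[i], fitness[i] = cand, f
    best = pop[fitness.argmin()].copy()

print("best objective found:", fitness.min())
```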
We consider the reconstruction problem for limited angle tomography using filtered backprojection (FBP) and lambda tomography. We use microlocal analysis to explain why the well-known streak artifacts are present at the end of the limited angular range. We explain how to mitigate the streaks and prove that our modified FBP and lambda operators are standard pseudodifferential operators, and so they do not add artifacts. We provide reconstructions to illustrate our mathematical results.
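For reference, the classical operators involved can be written as follows (a standard formulation, not quoted from the paper): with R the 2D Radon transform and R* the backprojection, filtered backprojection reconstructs f itself, whereas lambda tomography reconstructs the edge-enhanced image

\[ \Lambda f \;=\; \sqrt{-\Delta}\, f \;=\; \frac{1}{4\pi}\, R^{*}\!\left(-\frac{\partial^{2}}{\partial s^{2}}\, R f\right), \]

and in the limited-angle setting both backprojections extend only over the available angular range, which is what produces the streaks at the boundary of that range.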
Classifying Developers into Core and Peripheral: An Empirical Study on Count and Network Metrics
(2017)
Knowledge about the roles developers play in a software project is crucial to understanding the project's collaborative dynamics. In practice, developers are often classified according to the dichotomy of core and peripheral roles. Typically, count-based operationalizations, which rely on simple counts of individual developer activities (e.g., number of commits), are used for this purpose, but there is concern regarding their validity and ability to elicit meaningful insights. To shed light on this issue, we investigate whether count-based operationalizations of developer roles produce consistent results, and we validate them with respect to developers' perceptions by surveying 166 developers. Improving over the state of the art, we propose a relational perspective on developer roles, using fine-grained developer networks that model the organizational structure, and examine developer roles in terms of developers' positions and stability within the developer network. In a study of 10 substantial open-source projects, we found that the primary difference between the count-based and our proposed network-based core-peripheral operationalizations is that the network-based ones agree more with developer perception. Furthermore, we demonstrate that a relational perspective can reveal further meaningful insights, such as that core developers exhibit high positional stability, upper positions in the hierarchy, and high levels of coordination with other core developers, which confirms assumptions of previous work.
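A minimal sketch, using NetworkX, of how a count-based and a network-based classification could be contrasted on commit data; the thresholding rule (top 20% as core), the input format, and the centrality measure are illustrative, not the operationalizations evaluated in the study.

```python
# Sketch: classify developers as core/peripheral once by commit count and once
# by their position in a developer network (eigenvector centrality).
# Input data and the 80/20 threshold are illustrative only.
from collections import Counter
import networkx as nx

# (developer, developer) collaboration pairs, e.g. derived from shared artifacts
edges = [("alice", "bob"), ("alice", "carol"), ("bob", "carol"), ("dave", "alice")]
commits = Counter({"alice": 120, "bob": 95, "carol": 12, "dave": 3})

def top_share(scores, share=0.2):
    ranked = sorted(scores, key=scores.get, reverse=True)
    n_core = max(1, int(len(ranked) * share))
    return set(ranked[:n_core])

core_by_count = top_share(commits)                  # count-based operationalization

G = nx.Graph(edges)                                 # network-based operationalization
centrality = nx.eigenvector_centrality(G)
core_by_network = top_share(centrality)

print("core (count):  ", core_by_count)
print("core (network):", core_by_network)
print("agreement:     ", core_by_count & core_by_network)
```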
Clayworks
(2006)
Clayworks is a software system which integrates collaborative real-time modeling and distributed computing. It addresses the challenge of developing a collaborative workspace with a seamless access to high-performance servers. Clayworks allows modeling of virtual clay objects and running computation-intensive deformation simulations for objects crashing into each other. To integrate heterogeneous computational resources, we adopted modern Grid middleware and provided the users with an intuitive graphical interface. We parallelized the computation of simulations using a Higher-Order Component (HOC) which abstracts over the Globus Web service resource framework (WSRF) used to interconnect our worksuite to the computation server. Clayworks is a representative of a large class of demanding systems which combine collaborative modeling with performance-critical computations, e.g., crash-tests or simulations for biological population evolution.
We consider the development of software systems that integrate collaborative real-time modeling and distributed computing. Our main goal is user-orientation: we need a collaborative workspace for geographically dispersed users with a seamless access of every user to high-performance servers. This paper presents a particular prototype, Clayworks, that allows modeling of virtual clay objects and running computation-intensive deformation simulations for objects crashing into each other. In order to integrate heterogeneous computational resources, we adopt modern Grid middleware and provide the users with an intuitive graphical interface. Simulations are parallelized using a higher-order component (HOC) which abstracts over the web service resource framework (WSRF) used to interconnect our worksuite to the computation server. Clayworks is a representative of a large class of demanding systems which combine collaborative, user-oriented modeling with performance-critical computations, e.g., crash-tests or simulations of biological population evolution.
Statistical properties of natural gray-value textures are modeled with co-occurrence matrices based on second-order gray-value statistics. The matrix then gives the a-priori probabilities of all gray-value pairs. Since multispectral images are increasingly analyzed in medical image processing, the well-known concept is extended here to arbitrary vector-valued images. This allows the available information to be exploited fully in texture classification. The approach is particularly suited to the detection of color textures, since value pairs from different spectral bands can be evaluated. Likewise, the method can also improve texture recognition in the multiscale decomposition of intensity images. The patterns emerging in the matrices then allow conclusions about the textures in the image to be drawn by extracting suitable texture descriptors.
Metadata management constitutes a key prerequisite for enterprises as they engage in data analytics and governance. Today, however, the context of data is often only manually documented by subject matter experts, and lacks completeness and reliability due to the complex nature of data pipelines. Thus, collecting data lineage—describing the origin, structure, and dependencies of data—in an automated fashion increases quality of provided metadata and reduces manual effort, making it critical for the development and operation of data pipelines. In our practice report, we propose an end-to-end solution that digests lineage via (Py‑)Spark execution plans. We build upon the open-source component Spline, allowing us to reliably consume lineage metadata and identify interdependencies. We map the digested data into an expandable data model, enabling us to extract graph structures for both coarse- and fine-grained data lineage. Lastly, our solution visualizes the extracted data lineage via a modern web app, and integrates with BMW Group’s soon-to-be open-sourced Cloud Data Hub.
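A minimal sketch of how lineage can be harvested from a PySpark job with the open-source Spline agent; the listener class and configuration key follow the Spline agent documentation as we recall it and should be verified against the agent version in use, and all URLs and paths are placeholders.

```python
# Sketch: attach the Spline agent to a PySpark session so that execution plans
# (and hence data lineage) are sent to a Spline producer endpoint.
# The agent JAR must be on the classpath (e.g. via spark-submit --jars).
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("lineage-demo")
    .config("spark.sql.queryExecutionListeners",
            "za.co.absa.spline.harvester.listener.SplineQueryExecutionListener")
    .config("spark.spline.producer.url", "https://spline.example.com/producer")
    .getOrCreate()
)

# Any write action now yields an execution plan that the agent harvests.
orders = spark.read.option("header", True).csv("s3://bucket/raw/orders.csv")
(orders.groupBy("country").count()
       .write.mode("overwrite").parquet("s3://bucket/curated/orders_by_country"))
```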
The estimation of illuminant color is mandatory for many applications in the field of color image quantification. However, it is an unresolved problem if no additional heuristics or restrictive assumptions apply. Assuming uniformly colored and roundly shaped objects, Lee has presented a theory and a method for computing the scene-illuminant chromaticity from specular highlights [H. C. Lee, J. Opt. Soc. Am. A 3, 1694 (1986)]. However, Lee’s method, called image path search, is less robust to noise and is limited in the handling of microtextured surfaces. We introduce a novel approach to estimate the color of a single illuminant for noisy and microtextured images, which frequently occur in real-world scenes. Using dichromatic regions of different colored surfaces, our approach, named color line search, reverses Lee’s strategy of image path search. Reliable color lines are determined directly in the domain of the color diagrams by three steps. First, regions of interest are automatically detected around specular highlights, and local color diagrams are computed. Second, color lines are determined according to the dichromatic reflection model by Hough transform of the color diagrams. Third, a consistency check is applied by a corresponding path search in the image domain. Our method is evaluated on 40 natural images of fruit and vegetables. In comparison with those of Lee’s method, accuracy and stability are substantially improved. In addition, the color line search approach can easily be extended to scenes of objects with macrotextured surfaces.
Color Texture Analysis of Moving Vocal Cords Using Approaches from Statistics and Signal Theory
(2000)
Textural features are applied for the detection of morphological pathologies of the vocal cords. Co-occurrence matrices are presented as statistical features, alongside filter-bank analysis with Gabor filters. Both methods are extended to handle color images, and their robustness against camera movement and vibration of the vocal cords is evaluated. Classification rates for three in vivo sequences range between 94.4% and 98.9%. The classification errors decrease if color features are used instead of grayscale features, for both the statistical and the Fourier-based features.
Integrative co-occurrence matrices are introduced as novel features for color texture classification. The extended co-occurrence notation allows the comparison between integrative and parallel color texture concepts. The information gain of the new matrices is shown quantitatively using the Kolmogorov distance and by extensive classification experiments on two datasets. Applying them to the RGB and the LUV color spaces, the combined color and intensity textures are studied and the existence of intensity-independent pure color patterns is demonstrated. The results are compared with two baselines: gray-scale texture analysis and color histogram analysis. The novel features improve the classification results by up to 20% and 32% over the first and second baseline, respectively.
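A minimal sketch of the integrative idea: a co-occurrence matrix computed between two different color channels, so that value pairs from different spectral bands are counted; the channel pair, displacement, and quantization below are illustrative choices, not the exact notation from the paper.

```python
# Sketch: cross-channel co-occurrence matrix, counting how often value i in
# channel a occurs next to value j in channel b under a fixed displacement.
import numpy as np

def cross_cooccurrence(img, ch_a, ch_b, offset=(0, 1), levels=32):
    """img: HxWx3 uint8 image; returns a levels x levels co-occurrence matrix."""
    q = (img.astype(np.uint16) * levels // 256).astype(np.uint8)   # quantize to 'levels'
    dy, dx = offset
    a = q[..., ch_a][max(0, -dy):q.shape[0] - max(0, dy),
                     max(0, -dx):q.shape[1] - max(0, dx)]
    b = q[..., ch_b][max(0, dy):, max(0, dx):][:a.shape[0], :a.shape[1]]
    mat = np.zeros((levels, levels), dtype=np.float64)
    np.add.at(mat, (a.ravel(), b.ravel()), 1.0)
    return mat / mat.sum()                                          # normalize to probabilities

rgb = np.random.default_rng(0).integers(0, 256, (64, 64, 3)).astype(np.uint8)
C_rg = cross_cooccurrence(rgb, ch_a=0, ch_b=1)                      # R/G channel pair
diff = np.subtract.outer(np.arange(32), np.arange(32))
print("contrast descriptor of the R/G matrix:", float(np.sum(C_rg * diff ** 2)))
```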
We present two methods that combine image reconstruction and edge detection in computed tomography (CT) scans. Our first method is an extension of the prominent filtered backprojection algorithm. In our second method we employ ℓ1-regularization for a stable calculation of the gradient. As opposed to the first method, we show that this approach is able to compensate for undersampled CT data.
Within many real-world networks, the links between pairs of nodes change over time. Thus, there has been a recent boom in studying temporal graphs. Recognizing patterns in temporal graphs requires a proximity measure to compare different temporal graphs. To this end, we propose to study dynamic time warping on temporal graphs. We define the dynamic temporal graph warping (dtgw) distance to determine the dissimilarity of two temporal graphs. Our novel measure is flexible and can be applied in various application domains. We show that computing the dtgw-distance is a challenging (in general) NP-hard optimization problem and identify some polynomial-time solvable special cases. Moreover, we develop a quadratic programming formulation and an efficient heuristic. In experiments on real-world data, we show that the heuristic performs very well and that our dtgw-distance performs favorably in de-anonymizing networks compared to other approaches.
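To make the idea concrete, the following sketch computes a dynamic-time-warping distance between two temporal graphs whose layers are compared by the Frobenius norm of their adjacency difference; unlike the dtgw-distance from the paper, it assumes a fixed vertex correspondence and therefore skips the vertex-matching part of the optimization.

```python
# Sketch: DTW over sequences of graph snapshots (adjacency matrices) with a
# fixed vertex correspondence. The full dtgw-distance additionally optimizes
# a vertex matching; this simplified version only warps the time dimension.
import numpy as np

def layer_cost(A, B):
    return float(np.linalg.norm(A - B))          # dissimilarity of two snapshots

def dtw_temporal_graphs(G1, G2):
    """G1, G2: lists of n x n adjacency matrices over the same vertex set."""
    n1, n2 = len(G1), len(G2)
    D = np.full((n1 + 1, n2 + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n1 + 1):
        for j in range(1, n2 + 1):
            c = layer_cost(G1[i - 1], G2[j - 1])
            D[i, j] = c + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n1, n2]

rng = np.random.default_rng(0)
snapshots_a = [(rng.random((5, 5)) < 0.3).astype(float) for _ in range(8)]
snapshots_b = snapshots_a[::2] + snapshots_a[-2:]      # a time-warped variant
print("dtw distance:", dtw_temporal_graphs(snapshots_a, snapshots_b))
```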
Smart grid, smart metering, electromobility, and the regulation of the power network are keywords of the transition in energy politics. In the future, the power grid will be smart. Based on different works, this article presents data collection, analysis, and monitoring software for a reference smart grid. We discuss two possible architectures for collecting data from energy analyzers and analyze their performance with respect to real-time monitoring, load peak analysis, and automated regulation of the power grid. In the first architecture, we analyze the latency, required bandwidth, and scalability for collecting data over the Modbus TCP/IP protocol, and in the second one over a RESTful web service. The analysis results show that the solution with Modbus is more scalable than the one with the RESTful web service. However, the performance and scalability of both architectures are sufficient for our reference smart grid and use cases.
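A minimal sketch of the Modbus TCP collection path with pymodbus; the host, register addresses, scaling, and polling rate are placeholders for a concrete energy analyzer, and the exact keyword for addressing the unit/slave differs between pymodbus versions.

```python
# Sketch: periodically poll holding registers of an energy analyzer over
# Modbus TCP and log the readings. Register map and scaling are placeholders.
import time
from pymodbus.client import ModbusTcpClient

client = ModbusTcpClient("192.0.2.10", port=502)    # analyzer address (placeholder)
client.connect()

try:
    while True:
        rr = client.read_holding_registers(address=0, count=2)   # e.g. power, voltage
        if not rr.isError():
            power_w = rr.registers[0]
            voltage_v = rr.registers[1] / 10.0                   # hypothetical scaling
            print(f"{time.strftime('%H:%M:%S')}  P={power_w} W  U={voltage_v} V")
        time.sleep(1.0)                                          # 1 Hz polling interval
finally:
    client.close()
```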
With examples concerning the development and dissemination of computer technology in the Soviet Union, the U.S., and other Western countries, it shall be demonstrated that computer development on the one hand and social change as well as changes in policy making and administration on the other hand are intertwined, without a clear direction of causation being discernible. It shall also be shown that perceived social and political threats posed by early computer technology sometimes actually helped to stop or at least slow down social change.
One conclusion that can be drawn from the case studies described for RRI is that the conscious steering of innovations fails because of diffuse and uncoordinated resistance from very different stakeholders. The case studies also suggest that the effectiveness of RRI might be rather limited.
This paper examines the conceptualization of sustainability in the context of information and communication technology (ICT) research. Through an inductive text analysis of sixteen literature reviews spanning from 2014 to 2023, key themes and concepts are identified, highlighting the complex relationship between ICT and sustainability. ICT is perceived both as an enabler and a problem for sustainability. Furthermore, the terminology and concept of sustainability in the context of ICT remain unclear. The emergence of digitalization as a novel socio-technical phenomenon poses additional challenges for conceptual alignment. While a holistic view of sustainability in ICT is desired, business and social implications receive less attention. The paper summarizes and discusses the developments in research on this topic over the past decade.
We demonstrate ControVol Flex, an Eclipse plugin for controlled schema evolution in Java applications backed by NoSQL document stores. The sweet spot of our tool is applications that are deployed continuously against the same production data store: each new release may bring about schema changes that conflict with legacy data already stored in production. The type system internal to the predecessor tool ControVol is able to detect common schema conflicts and enables developers to resolve them with the help of object-mapper annotations. Our new tool ControVol Flex lets developers choose their schema-migration strategy: either all legacy data is migrated eagerly by means of NotaQL transformation scripts, or lazily, as declared by object-mapper annotations. Our tool is even capable of carrying out both strategies in combination, eagerly migrating data in the background while lazily migrating data that is meanwhile accessed by the application. From the viewpoint of the application, it remains transparent how legacy data is migrated: every read access yields an entity that matches the structure that the current application code expects. Our live demo shows how ControVol Flex gracefully solves a broad range of common schema-evolution tasks.
Building scalable web applications on top of NoSQL data stores is becoming common practice. Many of these data stores can easily be accessed programmatically, and do not enforce a schema. Software engineers can design the data model on the go, a flexibility that is crucial in agile software development. The typical tasks of database schema management are now handled within the application code, usually involving object mapper libraries. However, today’s Integrated Development Environments (IDEs) lack the proper tool support when it comes to managing the combined evolution of the application code and of the schema. Yet simple refactorings such as renaming an attribute at the source code level can cause irretrievable data loss or runtime errors once the application is serving in production. In this demo, we present ControVol, a framework for controlled schema evolution in application development against NoSQL data stores. ControVol is integrated into the IDE and statically type checks object mapper class declarations against the schema evolution history, as recorded by the code repository. ControVol is capable of warning of common yet risky cases of mismatched data and schema. ControVol is further able to suggest quick fixes by which developers can have these issues automatically resolved.
In building software-as-a-service applications, a flexible development environment is key to shipping early and often. Therefore, schema-flexible data stores are becoming more and more popular. They can store data with heterogeneous structure, allowing for new releases to be pushed frequently, without having to migrate legacy data first. However, the current application code must continue to work with any legacy data that has already been persisted in production. To let legacy data structurally "catch up" with the latest application code, developers commonly employ object mapper libraries with life-cycle annotations. Yet when used without caution, they can cause runtime errors and even data loss. We present ControVol, an IDE plugin that detects evolutionary changes to the application code that are incompatible with legacy data. ControVol warns developers already at development time, and even suggests automatic fixes for lazily migrating legacy data when it is loaded into the application. Thus, ControVol ensures that the structure of legacy data can catch up with the structure expected by the latest software release.
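To illustrate the lazy strategy in general terms (not ControVol's actual mechanism, which relies on object-mapper annotations in Java), the sketch below migrates a renamed attribute the moment a legacy document is read from a document store; the collection and field names are made up.

```python
# Sketch of lazy schema migration: when a legacy document is loaded, an old
# attribute name is mapped to the new one and the fix is written back, so the
# application only ever sees the current structure. Names are illustrative.
from pymongo import MongoClient

collection = MongoClient("mongodb://localhost:27017")["shop"]["customers"]

def load_customer(customer_id):
    doc = collection.find_one({"_id": customer_id})
    if doc is None:
        return None
    if "fullname" not in doc and "name" in doc:        # legacy structure detected
        doc["fullname"] = doc.pop("name")
        collection.update_one({"_id": customer_id},    # persist the migration lazily
                              {"$set": {"fullname": doc["fullname"]},
                               "$unset": {"name": ""}})
    return doc
```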
In this paper we describe and evaluate an implementation of CPU-style SIMD ray traversal on the GPU. We show how spreading moderately wide BVHs (up to a branching factor of eight) across multiple threads in a warp can improve performance while not requiring expensive pre-processing. The presented ray-traversal method exhibits improved traversal performance especially for increasingly incoherent rays.
Business process improvement (BPI) is of high priority for practitioners. However, the most value-adding phase in a BPI project, namely the "act of improvement", is insufficiently supported despite the many existing methods and techniques. Until now, it has been largely unclear to what degree existing BPI techniques support each other and are interrelated with one another. Thus, the purpose of this paper is to investigate the functional interdependencies between BPI techniques in order to gain a better understanding of the beneficial synergies between them and to provide a basis for purposefully combining them within projects. Based on the functional interdependencies, a graphical "Functional Interdependency Map" is developed and its usability demonstrated in an experiment. The paper is valuable for academics and practitioners alike because the impact of BPI on organizational performance is high.
Building applications for processing data lakes is a software engineering challenge. We present Darwin, a middleware for applications that operate on variational data. This concerns data with heterogeneous structure, usually stored within a schema-flexible NoSQL database. Darwin assists application developers in essential data and schema curation tasks: Upon request, Darwin extracts a schema description, discovers the history of schema versions, and proposes mappings between these versions. Users of Darwin may interactively choose which mappings are most realistic. Darwin is further capable of rewriting queries at runtime, to ensure that queries also comply with legacy data. Alternatively, Darwin can migrate legacy data to reduce the structural heterogeneity. Using Darwin, developers may thus evolve their data in sync with their code. In our hands-on demo, we curate synthetic as well as real-life datasets.
This paper empirically examines the current state of the IS offshoring phenomenon in Germany regarding project characteristics and success patterns. Relying on a sample of 304 projects conducted in various industry sectors and companies, the results show that IS offshoring primarily occurs in the Telecommunications and IT sectors at large corporations. Cost reduction is the main reason for going offshore, and offshore projects are typically executed as part of a larger program. Noticeably, most projects are delivered from India. Additionally, neither captive offshoring nor offshore outsourcing dominates as a delivery option. Comparing different project subgroups regarding project success, the results reveal that projects delivered by an internal or partially owned service provider are more successful. Other project characteristics, such as a project's embeddedness in a larger offshoring program, a project's size, or a project's offshoring degree in terms of relatively offshored labor hours, show few significant differences. The paper addresses the paucity of empirical research on the current state of the IS offshoring phenomenon in Germany.
Differential phase contrast imaging (DPCI) enables the visualization of soft tissue contrast using X-rays. In this work we introduce a reconstruction framework based on curvelet expansion and sparse regularization for DPCI. We show that curvelets provide a suitable data representation for DPCI reconstruction that allows the preservation of edges as well as an exact analytic representation of the system matrix. As a first evaluation, we show results using simulated phantom data.
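A typical formulation of such a sparsity-based reconstruction (our paraphrase of the standard approach, not a formula quoted from the paper): with A the DPCI forward operator, y the measured differential phase data, and Ψ* the inverse curvelet transform, one solves

\[ \min_{c}\; \tfrac{1}{2}\,\big\|A\,\Psi^{*}c - y\big\|_{2}^{2} \;+\; \lambda\,\|c\|_{1}, \qquad f = \Psi^{*}c, \]

so that the image f is synthesized from few curvelet coefficients, which preserves edges while the ℓ1 penalty suppresses noise.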
Cybersecurity in health
(2019)
Purpose
Cybersecurity in healthcare has become an urgent matter in recent years due to various malicious attacks on hospitals and other parts of the healthcare infrastructure. The purpose of this paper is to provide an outline of how core values of the health systems, such as the principles of biomedical ethics, are in a supportive or conflicting relation to cybersecurity.
Design/methodology/approach
This paper claims that it is possible to map the desiderata relevant to cybersecurity onto the four principles of medical ethics, i.e. beneficence, non-maleficence, autonomy and justice, and explore value conflicts in that way.
Findings
With respect to the question of how these principles should be balanced, there are reasons to think that the priority of autonomy relative to beneficence and non-maleficence in contemporary medical ethics could be extended to value conflicts in health-related cybersecurity.
Research limitations/implications
However, the tension between autonomy and justice, which relates to the desideratum of usability of information and communication technology systems, cannot be ignored even if one assumes that respect for autonomy should take priority over other moral concerns.
Originality/value
In terms of value conflicts, most discussions in healthcare deal with the conflict of balancing efficiency and privacy, given the sensitive nature of health information. In this paper, the authors provide a broader and more detailed outline.
Cybersecurity in Health Care
(2020)
Ethical questions have always been crucial in health care; the rapid dissemination of ICT makes some of those questions even more pressing and also raises new ones. One of these new questions is cybersecurity in relation to ethics in health care. In order to more closely examine this issue, this chapter introduces Beauchamp and Childress’ four principles of biomedical ethics as well as additional ethical values and technical aims of relevance for health care. Based on this, two case studies—implantable medical devices and electronic Health Card—are presented, which illustrate potential conflicts between ethical values and technical aims as well as between ethical values themselves. It becomes apparent that these conflicts cannot be eliminated in general but must be reconsidered on a case-by-case basis. An ethical debate on cybersecurity regarding the design and implementation of new (digital) technologies in health care is essential.