Fakultät Informatik und Mathematik
ELSI accompanying research is increasingly becoming an integral part of research and development projects, not least because third-party funders such as the EU or the BMBF explicitly require in many funding lines that technology be developed in a value-based manner, i.e., that 'responsible research and innovation' (RRI) be practiced. The guiding idea is that the design of technology should take all stakeholder interests into account; the identification of these interests, as well as a morally grounded balance between conflicting interests, is to be achieved through participatory procedures. Ethical guidelines or other codifications of norms and values are only of limited use here; in practice, applicable procedures are needed. In recent years, such procedures have been developed and have already been tested and applied in R&D practice. Three (discourse-ethics-based) procedures that can be combined with one another are presented.
SQL-on-Hadoop processing engines have become state-of-the-art in data lake analysis. However, the skills required to tune such systems are rare. This has inspired automated tuning advisors which profile the query workload and produce tuning setups for the low-level MapReduce jobs. Yet with highly dynamic query workloads, repeated re-tuning costs time and money in IaaS environments. In this paper, we focus on reducing the costs for up-front tuning. At the heart of our approach is the observation that a SQL query is compiled into a query plan of MapReduce jobs. While the plans differ from query to query, single jobs tend to be similar between queries. We introduce the notion of the code signature of a MapReduce job and, based on this, our concept of job similarity. We show that we can effectively recycle tuning setups from similar MapReduce jobs already profiled. In doing so, we can leverage any third-party tuning adviser for MapReduce engines. We are able to show that by recycling tuning setups, we can reduce the time spent on profiling by 50% in the TPC-H benchmark.
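A minimal sketch of the recycling idea, assuming a hypothetical profile_and_tune adviser and using a normalized hash of a job's code as its signature (names and normalization are illustrative, not the paper's implementation):

```python
# Sketch: recycle tuning setups across MapReduce jobs with similar code.
import hashlib

tuning_cache = {}  # code signature -> tuning setup

def code_signature(job_source: str) -> str:
    """Normalize the job's source (here: whitespace and case) and hash it."""
    normalized = " ".join(job_source.lower().split())
    return hashlib.sha256(normalized.encode()).hexdigest()

def tuned_setup(job_source: str, profile_and_tune):
    """Reuse a cached setup for a similar job; profile only on a cache miss."""
    sig = code_signature(job_source)
    if sig not in tuning_cache:            # pay profiling costs only once
        tuning_cache[sig] = profile_and_tune(job_source)  # any 3rd-party adviser
    return tuning_cache[sig]
```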
This article revisits an analysis on (in)accuracies of time series averaging under dynamic time warping (dtw) conducted by Niennattrakul and Ratanamahatana [16]. They proposed a correctness-criterion for dtw-averages and postulated that dtw-averages can drift out of the cluster of time series to be averaged. They claimed that dtw-averages are inaccurate if they violate the correctness-criterion or suffer from the drift-out phenomenon. Furthermore, they conjectured that such inaccuracies are caused by the lack of triangle inequality. In this article, we show that a rectified version of the correctness-criterion is unsatisfiable and that the concept of drift-out is geometrically and operationally inconclusive. Satisfying the triangle inequality is insufficient to achieve correctness and unnecessary to overcome the drift-out phenomenon. We place the concept of drift-out on a principled basis and show that Fréchet means never drift out. The adjusted drift-out is a way to test to what extent an approximated dtw-average is coherent. Empirical results show that approximations obtained by the state-of-the-art averaging methods are incoherent in over a third of all cases.
The sample mean is one of the most fundamental concepts in statistics, with far-reaching implications for data mining and pattern recognition. Household load profiles are, compared to aggregated levels, more intermittent, and a specific error measure based on local permutations has been proposed to cope with this when comparing profiles. We formally describe a distance based on this error, the local permutation invariant (LPI) distance, and introduce the sample mean problem in the LPI space. An existing exact solution has exponential complexity and is tractable only for very few profiles. We propose three subgradient-based approximation algorithms and compare them empirically on 100 households of the CER dataset. We find that stochastic subgradient descent approximates the mean best, while the majorize-minimize mean is a good compromise for applications, as no hyperparameter tuning is needed. We show how the algorithms can be used in forecasting and clustering to achieve more appropriate results than with the arithmetic mean.
The literature postulates that the dynamic time warping (dtw) distance can cope with temporal variations but stores and processes time series in a form as if the dtw-distance cannot cope with such variations. To address this inconsistency, we first show that the dtw-distance is not warping-invariant—despite its name and contrary to its characterization in some publications. The lack of warping-invariance contributes to the inconsistency mentioned above and to a strange behavior. To eliminate these peculiarities, we convert the dtw-distance to a warping-invariant semi-metric, called time-warp-invariant (twi) distance. Empirical results suggest that the error rates of the twi and dtw nearest-neighbor classifier are practically equivalent in a Bayesian sense. However, the twi-distance requires less storage and computation time than the dtw-distance for a broad range of problems. These results challenge the current practice of applying the dtw-distance in nearest-neighbor classification and suggest the proposed twi-distance as a more efficient and consistent option.
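For readers unfamiliar with the distance under discussion, a minimal dynamic-programming sketch of dtw (squared local costs, no window constraint; a simplification, not the paper's exact formulation):

```python
import numpy as np

def dtw(x, y):
    """Dynamic time warping distance between two 1-d series."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (x[i - 1] - y[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return np.sqrt(D[n, m])

# A stuttered copy of a series has dtw-distance 0 to the original ...
print(dtw([1, 2, 3], [1, 2, 2, 2, 3]))  # 0.0
# ... yet, as the article shows, dtw is not warping-invariant in general.
```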
Averaging time series under dynamic time warping is an important tool for improving nearest-neighbor classifiers and formulating centroid-based clustering. The most promising approach poses time series averaging as the problem of minimizing a Fréchet function. Minimizing the Fréchet function is NP-hard and so far solved by several heuristics and inexact strategies. Our contributions are as follows: we first discuss some inaccuracies in the literature on exact mean computation in dynamic time warping spaces. Then we propose an exponential-time dynamic program for computing a global minimum of the Fréchet function. The proposed algorithm is useful for benchmarking and evaluating known heuristics. In addition, we present an exact polynomial-time algorithm for the special case of binary time series. Based on the proposed exponential-time dynamic program, we empirically study properties like uniqueness and length of a mean, which are of interest for devising better heuristics. Experimental evaluations indicate substantial deficits of state-of-the-art heuristics in terms of their output quality.
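For reference, the objective under discussion is the Fréchet function of the sample $x_1, \dots, x_n$, stated here for the squared dtw-distance as is standard in this line of work:

$$F(z) \;=\; \frac{1}{n}\sum_{i=1}^{n} \operatorname{dtw}(z, x_i)^{2},$$

and a time series $z$ attaining the global minimum of $F$ is a (Fréchet) mean of the sample.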
In this talk, Prof. Dr. Wolfgang Mauerer and Ralf Ramsauer look under the hood of the distributed version control system Git. In addition to a detailed description of the structures and plumbing APIs that Git uses internally to create and link commits, the speakers also cover useful features and standards that facilitate collaboration in large open-source projects.
As a result of the enormous growth in data traffic for autonomous driving, the conventional in-vehicle network is no longer sufficient, and new types of network concepts are required in the vehicle. This part of the automobile is known as the next-generation communication network. Since new car systems can be extended with various services at any time, the network must adapt dynamically to new requirements wherever possible. For example, data flows must be configured dynamically between new services. Data rates will also be much higher in the future than they are today. This is one of the main reasons to search for new technologies for data transfer in vehicles, based on an in-vehicle Ethernet network. The process of configuring networks automatically has been discussed several times in recent years. One of the next steps is verifying and validating the automatic configuration process during the development of the new communication network. This research paper identifies several ways to ensure that the automatically generated network configuration leads to a secure system. To this end, other parts of the company's enterprise IT architecture and network technologies, the conventional vehicle network, and other options for verification and validation are analysed.
The Spoken Wikipedia Corpus collection: Harvesting, alignment and an application to hyperlistening
(2019)
Spoken corpora are important for speech research, but are expensive to create and do not necessarily reflect (read or spontaneous) speech ‘in the wild’. We report on our conversion of the preexisting and freely available Spoken Wikipedia into a speech resource. The Spoken Wikipedia project unites volunteer readers of Wikipedia articles. There are initiatives to create and sustain Spoken Wikipedia versions in many languages and hence the available data grows over time. Thousands of spoken articles are available to users who prefer a spoken over the written version. We turn these semi-structured collections into structured and time-aligned corpora, keeping the exact correspondence with the original hypertext as well as all available metadata. Thus, we make the Spoken Wikipedia accessible for sustainable research. We present our open-source software pipeline that downloads, extracts, normalizes and text–speech aligns the Spoken Wikipedia. Additional language versions can be exploited by adapting configuration files or extending the software if necessary for language peculiarities. We also present and analyze the resulting corpora for German, English, and Dutch, which presently total 1005 h and grow at an estimated 87 h per year. The corpora, together with our software, are available via http://islrn.org/resources/684-927-624-257-3/. As a prototype usage of the time-aligned corpus, we describe an experiment about the preferred modalities for interacting with information-rich read-out hypertext. We find alignments to help improve user experience and factual information access by enabling targeted interaction.
Speech-based interactive systems, such as virtual personal assistants, inevitably use complex architectures, with a multitude of modules working in series (or, less often, in parallel) to perform a task (e.g., giving personalized movie recommendations via dialog). Add modules for evoking and sustaining sociability with the user, and the accumulation of processing latencies through the modules results in considerable turn-taking delays. We introduce incremental speech processing into the generation pipeline of the system to overcome this challenge with only minimal changes to the system architecture, through partial underspecification that is resolved as necessary. In a user study with a sociable movie recommendation agent, the incremental system objectively diminishes turn-taking delays; furthermore, users not only rate the incremental system as more responsive, but also rate its recommendation performance as higher.
Ellipses denote the omission of one or more grammatically necessary phrases. In this paper, we demonstrate how to identify such ellipses as a rhythmical pattern in modern and postmodern free verse poetry, using data from lyrikline, which contains the audio recording of each poem as spoken by the original author. We present a feature engineering approach based on literary analysis as well as a neural network-based approach for identifying ellipses within the lines of a poem. A contrast class to the ellipsis is defined from poems consisting of complete and correct sentences. The feature-based approach uses features derived from a parser, such as verbs, commas, and sentence-ending punctuation. The neural network classifier is trained on the line level to integrate the textual information, the spoken recitation, and the pause information between lines, and to integrate information across the lines within the poem. A statistical analysis of the poets' gender showed that 65% of all elliptical poems were written by female poets. The best result, measured by the weighted F-measure, for classifying ellipsis against the contrast class is 0.94, achieved with the neural network-based approach. The best result for classifying elliptical lines is 0.62, achieved with the feature-based approach.
This work aims to discern the poetics of concrete poetry by using a corpus-based classification focusing on the two most important techniques used within concrete poetry: semantic decomposition and syntactic permutation. We demonstrate how to identify concrete poetry in modern and postmodern free verse. A class contrasting with concrete poetry is defined on the basis of poems with complete and correct sentences. We used data from lyrikline, which contains both the written and the spoken form of poems as read by the original author. We explored two approaches for the identification of concrete poetry. The first is based on the definition of concrete poetry in literary theory and extracts various types of features derived from a parser, such as verbs, nouns, commas, sentence endings, conjunctions, and asemantic material. The second is a neural network-based approach, which is theoretically less informed by human insight, as it does not have access to features established by scholars. This approach used the following inputs: textual information and the spoken recitation of poetic lines, as well as information about pauses between lines. The results of the neural network-based approach are more accurate than those of the feature-based approach. The best result, measured by the weighted F-measure, for classifying concrete poetry against the contrasting class is 0.96.
Translation systems aim to perform a meaning-preserving conversion of linguistic material (typically text but also speech) from a source to a target language (and, to a lesser degree, between the corresponding socio-cultural contexts). Dubbing, i.e., the lip-synchronous translation and revoicing of speech, adds constraints on the close matching of phonetic, and resulting visemic, synchrony characteristics of source and target material. There is an inherent conflict between a translation's meaning preservation and its 'dubbability', and the resulting trade-off can be controlled by weighting the synchrony constraints. We introduce our work, which to the best of our knowledge is the first of its kind, on integrating synchrony constraints into the machine translation paradigm. We present first results for the integration of synchrony constraints into encoder-decoder-based neural machine translation and show that considerably more 'dubbable' translations can be achieved with only a small impact on BLEU score; dubbability improves more steeply than BLEU degrades.
Data-based analyses are becoming more and more common in the Digital Humanities and tools are needed that focus human efforts on the most interesting and important aspects of exploration, analysis and annotation by using active machine learning techniques. We present our ongoing work on a tool that supports classification tasks for spoken documents (in our case: read-out post-modern poetry) using a neural networks-based classification backend and a web-based exploration and classification environment.
The following article considers the need to integrate social aspects into Master Production Scheduling. This is justified by the demand for sustainable business processes and the previously neglected social dimension, which is also reflected in the development of working conditions. The linear optimization model for Master Production Scheduling outlined here, in connection with aspects of Human Resource Requirements Planning, offers an approach to reducing this research gap and underlines the urgency of long-term planning and control of employee workloads. For companies, this results in a Decision Support System through the evaluation of measures to improve working conditions.
More than 30 years after its first implementation, IT outsourcing (ITO) is unanimously considered a critical component of corporate strategy for private and public institutions alike. While implementations of ITO around the world share some common characteristics like typical reasons for outsourcing, key success factors, or dimensions along which they can be classified, extant research also points to regional differences. However, research on this topic, specifically regarding pivotal contract features like contract value, contract length, or pricing methods, is still in its infancy, and quantitative analyses on the subject are particularly scarce. We address this research gap by analyzing data on 14,917 ITO contracts closed between 2007 and 2017 through the lens of cultural regions and three statistical methods. The contribution of our paper is threefold. First, our descriptive analysis points to globally decreasing contract lengths and contract values, confirming previous studies and practice reports. Second, an ANOVA with independent post-hoc testing provides quantitative support for the degree of dissimilarity among individual regions in pivotal ITO contract features. Finally, our quantitative replication of a previous study identifies culture-induced regional differences between the USA and Japan regarding the effect of influence factors on ITO contract features.
The claim of machine ethics to be a field in its own right stands and falls with the question of whether autonomous machines can be regarded as moral agents. To answer this question, we examine under which conditions machines can be regarded as moral agents and whether machines can be ascribed autonomy and the capacity for responsibility. The authors conclude that while there may be no "moral machines" in the foreseeable future, it should be the task of machine ethics to design machines in such a way that they can be accepted as quasi-moral agents. However, the human actors who design, develop, build, and use such machines must not be released from their responsibility.
A basic task in the design of a robotic production cell is the relative placement of robot and workpiece. The fundamental requirement is that the robot can reach all process positions; only then can one think about further optimization. Therefore, an algorithm that automatically places an object into the workspace is very desirable. However, many iterative optimization algorithms cannot guarantee that all intermediate steps are reachable, resulting in complicated procedures. We present a novel approach which extends a robot by a virtual prismatic joint, which measures the distance to the workspace, such that any TCP frame is reachable. This allows higher-order nonlinear programming algorithms to be used for the placement of an object alone, as well as for optimal placement under some differentiable criterion.
Introduction:
The ubiquitous penetration of digital technology applications, and the accompanying influence on organizational structures, makes an actively shaped implementation necessary, particularly in the highly sensitive medical domain. The correlative relationship between the omnipresent buzzwords Big Data (analytics), smart technologies, the Internet of Things, and artificial intelligence (AI), which matters for any implementation considerations, is not always obvious: many conceptual ideas from earlier AI approaches manifest themselves in today's various digitalization trends and lines of development.
Methods:
An intensive theoretical engagement with these lines of development, and a substantive reflection on the accompanying discourses, can open a way, in the context of AI-based applications and the requirements and challenges of their implementation, to derive ethical and social aspects of today's AI developments at least in part (illustrated in the talk by the example of AI use in medical diagnostics), to understand them better, and, where appropriate, to supplement them with new factors.
Results:
The juxtaposition of earlier and current AI debates makes a fundamental contribution to identifying relevant value domains and to developing first preliminary recommendations for the design and use of AI systems in practice, and thus for their constructive embedding in existing organizational structures.
Discussion:
Beyond identifying potential stakeholders and their possible interests, such an empirically oriented approach offers first insights into the normative landscape required for creating and implementing medical-technology applications. However, in order to derive from this, as envisaged in the project "KI & Ethik", tentative ethical guidelines as a framework for deploying a medical-technology application, this foundation must be extended by the perspectives gained from stakeholder interviews on the requirements and challenges of an implementation, as well as on usage expectations.
The German National Educational Panel Study (NEPS) was set up to provide an empirical basis for longitudinal analyses of individuals’ educational careers and competencies and how they unfold over the life course in relation to family, formal educational institutions, and private life. Educational developments and decisions over the life span are being tracked in six starting cohorts as a foundation for characterizing and analyzing educational processes. These six starting cohorts include newborns, Kindergarten children, secondary school children (5th and 9th grade), first-year undergraduate students, and adults. Because access to the target population in several starting cohorts was gained via educational institutions such as Kindergartens and schools, multistage sampling approaches were implemented that reflect the clustered structure of the target populations. Samples in individual contexts, such as those in the adult and newborn cohorts, were established via register-based stratified cluster approaches. This chapter briefly reviews the designs of the implemented sampling strategies for each established starting cohort and provides information on the levels of attrition in the panel development.
A new vision in semantic big data processing is to create enterprise data hubs, with a 360° view on all data that matters to a corporation. As we discuss in this paper, a new generation of multi-model database systems seems a promising architectural choice for building such scalable, non-native triple stores. In this paper, we first characterize this new generation of multi-model databases. Then, discussing an example scenario, we show how they allow for agile and flexible schema management, spanning a large design space for creative and incremental data modelling. We identify the challenge of generating sound triple-views from data stored in several, interlinked models, for SPARQL querying. We regard this as one of several appealing research challenges where the semantic big data and the database architecture community may join forces.
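A toy flavor of such a triple-view, mapping one document-model record to RDF with the rdflib package (the namespace and key-to-predicate mapping are hypothetical, for illustration only):

```python
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/")
doc = {"_id": "p17", "name": "Ada", "worksFor": "dept42"}  # document model

g = Graph()
subject = EX[doc["_id"]]
for key, value in doc.items():
    if key != "_id":
        g.add((subject, EX[key], Literal(value)))  # one triple per field

print(g.serialize(format="turtle"))
# The generated view is now SPARQL-queryable, e.g. via g.query(...).
```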
Recently, the semantics of the JSON Schema format, a de-facto standard for JSON schema declarations, has been formalized. It turns out that JSON Schema is a surprisingly complex schema language based on an open document semantics. In this paper, we present a first empirical analysis of a curated collection of real-world JSON Schemas. Knowing what real JSON Schemas are like (to borrow from a title of a related study on DTDs) helps practitioners and researchers in making realistic assumptions when building tools for JSON Schema processing.
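A minimal illustration of the open document semantics mentioned above, checked with the Python jsonschema package (the schema is illustrative, not drawn from the studied collection):

```python
from jsonschema import validate  # pip install jsonschema

schema = {
    "type": "object",
    "properties": {"name": {"type": "string"}},
    "required": ["name"],
    # No "additionalProperties": false here: under JSON Schema's open
    # semantics, undeclared members are permitted by default.
}

validate({"name": "lily", "color": "red"}, schema)  # passes despite "color"
```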
Gewichtung
(2019)
The aim of analyzing quantitative data is to generalize sample results to the population of interest (Häder/Häder, chapter 22 in this volume). In practice, however, samples almost always differ from the population in certain respects, whether through deliberate oversampling of a particular subpopulation (the precision of an estimate is essentially determined by the number of cases in the sample, which is why rare subpopulations, for which valid estimates are required, are included in the survey with a larger sampling fraction) or through selective nonresponse (Engel/Schmidt, chapter 27 in this volume). Many surveys exhibit a so-called middle-class bias: persons with a medium to higher level of education (measured by the highest school-leaving qualification) are the most willing to participate in surveys and are therefore overrepresented in the survey data.
We demonstrate MigCast, a tool-based advisor for exploring data migration strategies in the context of developing NoSQL-backed applications. Users of MigCast can consider their options for evolving their data model along with legacy data already persisted in the cloud-hosted production database. They can explore alternative actions, with the financial costs predicted for the chosen cloud provider. Thereby, they are better equipped to assess the potential consequences of imminent data migration decisions. To this end, MigCast maintains an internal cost model that takes into account characteristics of the data instance, the expected workload, data model changes, and cloud provider pricing models. Hence, MigCast enables software project stakeholders to remain in control of operative costs and to make informed decisions when evolving their applications.
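A toy illustration of the kind of trade-off such a cost model weighs, with entirely hypothetical prices and parameters (MigCast's actual model is considerably richer):

```python
# Compare migrating all legacy entities eagerly against migrating lazily,
# on first access, under an assumed cloud pricing model.
entities      = 1_000_000
touched_share = 0.2      # fraction of legacy entities the workload will read
write_price   = 0.50e-6  # $ per entity write (assumed)
read_overhead = 0.30e-6  # $ extra per on-read (lazy) migration (assumed)

eager = entities * write_price
lazy  = entities * touched_share * (write_price + read_overhead)
print(f"eager: ${eager:.2f}   lazy: ${lazy:.2f}")
# For cold workloads (small touched_share) lazy migration wins; the
# break-even point shifts with the provider's pricing model.
```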
In this paper, we raise the question of how data architects model their data for processing in Apache Hive. This well-known SQL-on-Hadoop engine supports complex value relations, where attribute types need not be atomic. In fact, this feature seems to be one of the prominent selling points, e.g., in Hive reference books. In an empirical study, we analyze Hive schemas in open source repositories. We examine to which extent practitioners make use of complex value relations and, accordingly, whether they write queries over complex types. Understanding which features are actively used will help make the right decisions in setting up benchmarks for SQL-on-Hadoop engines, as well as in choosing which query operators to optimize for.
A lot of research has been done in the domain of Wireless Sensor Networks (WSNs) in recent years. Nowadays, Wireless Sensor Networks operate in a wide range of scenarios and applications, such as energy management services, heat and water billing, and smoke detectors. Research and development in this domain nevertheless continue. During the operation of such a network, software updates are needed only seldom. In contrast, during development and testing, software updates must be performed very frequently to upload new firmware onto a large number of nodes. In this paper, we examine such a software update for a particular, but popular and often-used sensor network platform. There are already interesting research papers on the process of updating sensor nodes. Our specific focus is on the technical part of such an update process. We argue why the existing update processes do not cover our requirements. The objective of our software update protocol is to enable the developer to update many nodes reliably and very quickly during the development and testing process. For this reason, energy consumption is only a marginal concern. We do not need a multi-hop protocol, because all devices are in range, e.g., in a laboratory. In this paper we survey well-known update protocols and architectures for software updates in WSNs, discuss the solutions and compare them to our approach. Our extensive simulations lead to the conclusion that the developed protocol performs updates in a fast, scalable and reliable manner.
A considerable corpus of research on software evolution focuses on mining changes in software repositories, but omits their pre-integration history. We present a novel method for tracking this otherwise invisible evolution of software changes on mailing lists by connecting all early revisions of changes to their final version in repositories. Since artefact modifications on mailing lists are communicated by updates to fragments (i.e., patches) only, identifying semantically similar changes is a non-trivial task that our approach solves in a language-independent way. We evaluate our method on high-profile open source software (OSS) projects like the Linux kernel, and validate its high accuracy using an elaborately created ground truth. Our approach can be used to quantify properties of OSS development processes, which is an essential requirement for using OSS in reliable or safety-critical industrial products, where certifiability and conformance to processes are crucial. The high accuracy of our technique allows, to the best of our knowledge, for the first time to quantitatively determine if an open development process effectively aligns with given formal process requirements.
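The core task of spotting revisions of one change can be pictured as a similarity over the changed lines of two patches; a toy sketch with difflib (the paper's actual analysis is considerably more refined):

```python
import difflib

def patch_similarity(patch_a: str, patch_b: str) -> float:
    """Similarity in [0, 1] over the changed (+/-) lines of two diffs."""
    def changed(patch):
        return [l for l in patch.splitlines() if l[:1] in ("+", "-")]
    return difflib.SequenceMatcher(None, changed(patch_a), changed(patch_b)).ratio()

v1 = "+int foo(void)\n+{\n+    return 1;\n+}\n"
v2 = "+int foo(void)\n+{\n+    return 2;\n+}\n"  # a later revision of the patch
print(patch_similarity(v1, v2))  # 0.75: candidate revisions of the same change
```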
Machine learning (ML) based decision making is becoming commonplace. For persons affected by ML-based decisions, a certain level of transparency regarding the properties of the underlying ML model can be fundamental. In this vision paper, we propose to issue consumer labels for trained and published ML models. These labels primarily target machine learning lay persons, such as the operators of an ML system, the executors of decisions, and the decision subjects themselves. Provided that consumer labels comprehensively capture the characteristics of the trained ML model, consumers are enabled to recognize when human intelligence should supersede artificial intelligence. In the long run, we envision a service that generates these consumer labels (semi-)automatically. In this paper, we survey the requirements that an ML system should meet, and correspondingly, the properties that an ML consumer label could capture. We further discuss the feasibility of operationalizing and benchmarking these requirements in the automated generation of ML consumer labels.
Machine learning experts prefer to think of their input as a single, homogeneous, and consistent data set. However, when analyzing large volumes of data, the entire data set may not be manageable on a single server, but must be stored on a distributed file system instead. Moreover, with the pressing demand to deliver explainable models, the experts may no longer focus on the machine learning algorithms in isolation, but must take into account the distributed nature of the data stored, as well as the impact of any data pre-processing steps upstream in their data analysis pipeline. In this paper, we make the point that even basic transformations during data preparation can impact the model learned, and that this is exacerbated in a distributed setting. We then sketch our vision of end-to-end explainability of the model learned, taking the pre-processing into account. In particular, we point out the potentials of linking the contributions of research on data provenance with the efforts on explainability in machine learning. In doing so, we highlight pitfalls we may experience in a distributed system on the way to generating more holistic explanations for our machine learning models.
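The point that basic preparation interacts with distribution can be seen with standardization: scaling each partition locally yields different training input, and hence a different model, than scaling globally (a deliberately small sketch):

```python
import numpy as np

data = np.array([1.0, 2.0, 3.0, 10.0, 20.0, 30.0])
parts = [data[:3], data[3:]]              # the same data, split across two nodes

def standardize(x):
    return (x - x.mean()) / x.std()

global_z = standardize(data)                                # one global scale
local_z = np.concatenate([standardize(p) for p in parts])   # per-partition scales

print(global_z.round(2))
print(local_z.round(2))  # differs: any downstream model inherits the difference,
                         # which provenance of the pre-processing can explain
```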
IOT Backdoors in Cars
(2019)
Connecting cheap IoT devices to the safety-critical network of a car can be an extremely bad idea, but at least it allows us to hack together our own automotive gadget. This talk explains the complete procedure involved in transforming a cheap OBD GSM dongle designed for fleet management into an open-source automotive hacking tool. First, the hardware reverse engineering is demonstrated, showing how the components are interconnected and work together. With this knowledge, it was possible to capture the communication of the GSM module and understand the OTA protocol used by this dongle, which can be used to extract the firmware. A quick reverse engineering of the software shows that no cryptographic authentication is used for the OTA updates, and therefore a pirate GSM BTS can be used to obtain remote code execution. After that, a new open-source firmware is written for the device, which can easily be extended and controlled remotely with the Lua scripting language. Examples of how hacking this dongle remotely can affect the safety of the driver will also be given.
Background and objective:
In this work, we present a systematic review concerning recent enabling technologies as tools for the diagnosis, treatment, and better quality of life of patients diagnosed with Parkinson's Disease (PD), as well as an analysis of future trends in new approaches to this end.
Methods:
In this review, we compile a number of works published at some well-established databases, such as Science Direct, IEEEXplore, PubMed, Plos One, Multidisciplinary Digital Publishing Institute (MDPI), Association for Computing Machinery (ACM), Springer and Hindawi Publishing Corporation. Each selected work has been carefully analyzed in order to identify its objective, methodology and results.
Results:
The review showed that the majority of works make use of signal-based data, which are often acquired by means of sensors. Also, we have observed an increasing number of works that employ virtual reality and e-health monitoring systems to increase the quality of life of PD patients. Despite the different approaches found in the literature, almost all of them make use of some sort of machine learning mechanism to aid automatic PD diagnosis.
Conclusions:
The main focus of this survey is computer-assisted diagnosis and how effective such approaches can be when handling the problem of PD identification. Also, the main contribution of this review is that it considers very recent works only, mainly from 2015 and 2016.
This is a report on a course taught at OTH Regensburg in the summer term of 2018. The students in this course built their own SQL-on-Hadoop engine as a term project in just 8 weeks. miniHive is written in Python and compiles SQL queries into MapReduce workflows. These are then executed on Hadoop. miniHive performs generic query optimizations (selection and projection pushdown, or cost-based join reordering), as well as MapReduce-specific optimizations.
The course was taught in English, using a flipped classroom model. The course material was mainly compiled from third-party teaching videos. This report describes the course setup, the miniHive milestones, and gives a short review of the most successful student projects.
A Hybrid Solution Method for the Capacitated Vehicle Routing Problem Using a Quantum Annealer
(2019)
The Capacitated Vehicle Routing Problem (CVRP) is an NP optimization problem (NPO) that has been of great interest for decades to both science and industry. The CVRP is a variant of the vehicle routing problem characterized by capacity-constrained vehicles. The aim is to plan tours for vehicles to supply a given number of customers as efficiently as possible. The problem is the combinatorial explosion of possible solutions, which grows superexponentially with the number of customers. Classical solvers provide good approximations to the globally optimal solution. D-Wave's quantum annealer is a machine designed to solve optimization problems. This machine uses quantum effects to speed up computation compared to classic computers. The challenge in solving the CVRP on the quantum annealer is the particular formulation of the optimization problem: it has to be mapped onto a quadratic unconstrained binary optimization (QUBO) problem. Complex optimization problems such as the CVRP can be decomposed into smaller subproblems, enabling a sequential solution of the partitioned problem. This work presents a quantum-classic hybrid solution method for the CVRP. It clarifies whether the implementation of such a method pays off in comparison to existing classical solution methods in terms of computation time and solution quality. Several approaches to solving the CVRP are elaborated, the arising problems are discussed, and the results are evaluated in terms of solution quality and computation time.
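To make the QUBO mapping concrete, a toy example with D-Wave's dimod package: picking exactly one of two routes, with a quadratic penalty enforcing the constraint (a drastically simplified sketch, not the paper's full CVRP encoding):

```python
import dimod  # D-Wave Ocean's binary quadratic model package

# Binary variable x_r = 1 iff route r is driven; route costs are assumed.
costs = {"r0": 7.0, "r1": 4.0}
P = 20.0  # penalty weight, larger than any cost difference

# QUBO: sum_r cost_r*x_r + P*(sum_r x_r - 1)^2 enforces "exactly one route".
Q = {(r, r): c - P for r, c in costs.items()}  # linear terms (x^2 = x)
Q[("r0", "r1")] = 2 * P                        # quadratic penalty term

bqm = dimod.BinaryQuadraticModel.from_qubo(Q, offset=P)
best = dimod.ExactSolver().sample(bqm).first
print(best.sample, best.energy)  # {'r0': 0, 'r1': 1} with energy 4.0
```

On hardware, the ExactSolver would be replaced by a quantum annealer sampler; the penalty weight P must then balance constraint satisfaction against the machine's limited dynamic range.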
To support a rational and efficient use of electrical energy in residential and industrial environments, Non-Intrusive Load Monitoring (NILM) provides several techniques to identify the state and power consumption profiles of connected appliances. Design requirements for such systems include low hardware and installation costs for residential purposes and reliability and high availability for industrial purposes, while keeping invasive interventions into the electrical infrastructure to a minimum. This work introduces a reference hardware setup that allows an in-depth analysis of electrical energy consumption in industrial environments. To identify appliances and their consumption profiles, appropriate identification algorithms are developed by the NILM community. To enable an evaluation of these algorithms on industrial appliances, we introduce the Laboratory-measured IndustriaL Appliance Characteristics (LILAC) dataset: 1302 measurements from one, two, and three concurrently running appliances of 15 appliance types, measured with the introduced testbed. To allow an in-depth analysis of appliance consumption, measurements were carried out with a sampling rate of 50 kHz and 16-bit amplitude resolution for voltage and current signals. We show in experiments that the signal signatures contained in the measurement data allow one to distinguish the individual measured electrical appliances with nearly 100% accuracy using a baseline machine learning approach.
Parkinson's disease (PD) is a neurodegenerative disease that affects millions of people worldwide, causing mental and, mainly, motor dysfunctions. The negative impact on the patient's daily routine has moved science in search of new techniques that can reduce its negative effects and also identify the disease in individuals. One of the main motor characteristics of PD is the hand tremor faced by patients, which turns out to be crucial information to be used towards a computer-aided diagnosis. In this context, we make use of handwriting dynamics data acquired from individuals submitted to tasks that measure abilities related to writing skills. This work proposes the application of recurrence plots to map the signals onto the image domain, which are further used to feed a convolutional neural network for learning proper information that can help the automatic identification of PD. The proposed approach was assessed on a public dataset under several scenarios that comprise different combinations of deep-learning architectures, image resolutions, and training set sizes. Experimental results showed a significant accuracy improvement compared to our previous work, with an average accuracy of over 87%. Moreover, we observed an improvement in accuracy concerning the classification of patients (i.e., mean recognition rates above 90%). These promising results show the potential of the proposed approach towards the automatic identification of Parkinson's disease.
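A recurrence plot maps a signal x onto a binary image R with R[i, j] = 1 whenever |x_i - x_j| falls below a threshold eps; a minimal version of this mapping step (simplified, omitting the embedding choices a full pipeline would make):

```python
import numpy as np

def recurrence_plot(x, eps=0.1):
    """Binary recurrence matrix: R[i, j] = 1 iff |x_i - x_j| < eps."""
    x = np.asarray(x, dtype=float)
    dist = np.abs(x[:, None] - x[None, :])  # pairwise distances
    return (dist < eps).astype(np.uint8)

signal = np.sin(np.linspace(0, 4 * np.pi, 128))  # stand-in for a pen signal
image = recurrence_plot(signal, eps=0.15)        # input image for the CNN
print(image.shape)  # (128, 128)
```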
The Internet of Things (IoT) and the Smart Grid (SG) are separate technologies. The digital transformation of the energy industry and the increasing digitalization of the private sector connect these technologies. In Germany, the SG is currently under construction. In order to use future innovative services, SG and IoT must be combined. For this, we connect the SG infrastructure with the IoT. A potentially insecure device and network (IoT) should be able to transfer data to and from a critical infrastructure (SG). Open research questions in this context concern the security requirements of a combined SG and IoT architecture and the mechanisms for authentication and authorisation in future applications spanning SG and IoT. The increasing networking of the two systems gives rise to new threats and attack vectors. Attacks on the architecture threaten authenticity, security and privacy. For the security analysis, we focus on two communication endpoints: the smart meter gateway and the IoT device. In our example, a connected charging station with cloud services is connected to a SG infrastructure. To create a really smart service, the charging station needs a connection to the SG to obtain the current share of renewable energy in the grid. With these two connections, new threats emerge. A security analysis over all the connections, including the vulnerabilities and the capabilities of an attacker, is developed in this paper. The analysis reveals the challenges of communication between IoT and SG. On this basis, we define technical and organizational requirements for authentication and authorization. Current authentication and authorization mechanisms no longer suffice for these requirements. We present a role-based trust model for safety-critical systems that addresses them. The new trust model is integrated into a role-based access control model; it defines data classes that separate sensitive from non-sensitive information.
SIM SIMulator
(2019)
The talk presented a tool that allows a SIM card to be simulated by a standard microcontroller. With this deception, the SIM-card side of the 3G mobile network standard's authentication can be circumvented. With additional hardware that spoofs a 3G base station, a man-in-the-middle attack can be carried out in the 3G network.
In this scenario, it is possible to read and analyze sensitive communication data, for example the entire traffic between a car and the manufacturer's backend servers. The tool is also suitable for pentesting modems as well as SIM and smartcard applications. The complete design of the so-called SIMulator is freely available as open-source software on GitHub.
Cloud Computing (CC), the Internet of Things (IoT) and the Smart Grid (SG) are separate technologies. The digital transformation of the energy industry and the increasing digitalization of the private sector connect these technologies. At the moment, CC is used as a service provider for the IoT. In Germany, the SG is currently under construction, and a cloud connection to the infrastructure has not been implemented yet. To build the SG cloud, the new privacy laws must be implemented, and it is therefore important to know which data can be stored and distributed via a cloud. In order to use future innovative services, SG and IoT must be combined. To this end, in the next step we connect the SG infrastructure with the IoT. A potentially insecure device and network (IoT) should be able to transfer data to and from a critical infrastructure (SG). In detail, we focus on two different connections: the communication between the smart meter switching box and the IoT device, and the data transferred between the IoT and SG clouds. In our example, a connected charging station with cloud services is connected to a SG infrastructure. To create a really smart service, the charging station needs a connection to the SG to obtain the current share of renewable energy in the grid. Private data, such as name, address and payment details, should not be transferred to the IoT cloud. With these two connections, new threats emerge. In this case, availability, confidentiality and integrity must be ensured. A risk analysis over all the cloud connections, including the vulnerabilities, the capabilities of an attacker and the resulting risk, is developed in this paper.
This talk will provide a general overview of how Scapy can be used for automotive penetration testing. All of Scapy's current automotive penetration testing features will be introduced and explained, and an overview of higher-level automotive protocols will also be given.
As automotive penetration testing becomes more important, the lack of free tools for automotive network penetration testing led us to integrate new features into Scapy. Scapy is a well-established framework for packet manipulation. The flexibility of Scapy allowed us to implement automotive interfaces (CAN) and automotive protocols (ISOTP, GMLAN, UDS, DoIP, OBD-II).
This talk explains the basics of these automotive protocols and the workflow with Scapy for automotive network penetration testing. A live demonstration with embedded hardware will be given.
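A flavor of the workflow, assuming a configured Linux SocketCAN interface can0 (a minimal sketch; the Scapy automotive documentation covers the full setup):

```python
from scapy.contrib.cansocket import CANSocket
from scapy.layers.can import CAN

# Open a SocketCAN interface (assumes can0 exists on a Linux host).
sock = CANSocket(channel="can0")

# Craft and send a raw CAN frame (here: a UDS session-control request
# on the OBD functional address), then sniff responses for one second.
frame = CAN(identifier=0x7DF, length=3, data=b"\x02\x10\x03")
sock.send(frame)
replies = sock.sniff(timeout=1)
replies.summary()
```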
The storage area required in the press shop of an automotive supplier has grown so dramatically in recent years that the production lots used so far had to be reduced manually to make storage possible at all. The resulting higher costs are quantified. As a starting point for improvement, the structural differences between the new lots and the lots delivered by a standard procedure in ERP systems are analyzed.
These differences show that the second stage of this two-stage procedure (first compute a lot, then reduce it suitably) usually revises the decisions of the first stage completely. An integrative approach is required. This is achieved by a linear optimization model with limited storage area.
The ILOG tool, a standard solver for linear optimization models, solves this model within minutes. Simulation demonstrates that this optimal procedure substantially reduces costs.
For several months, this procedure has been in use in the press shop as part of a rolling planning process. Rolling planning usually causes a loss in solution quality. In this case, nearly optimal solutions were achieved, substantially reducing costs. The plant management and the employees are very satisfied with the new procedure.
In many companies, production defects are identified by humans visually inspecting products. Overlooked production defects increasingly lead to expensive customer complaints. In this work, this manual quality assurance is analyzed for the production of footballs. As an alternative, the footballs are photographed by a robot, and machine learning identifies production defects on these digital photos. This inspection process is implemented in SAP Leonardo, thereby extending the production process (or rather, the business process) already implemented in an SAP system into a continuously digitalized overall process. This achieves significantly better inspection results and substantially reduces the lead time of the inspection process.
Background: This article is based on an ongoing long-term study in which off-the-shelf motion trackers measure steps during the rehabilitation of geriatric trauma patients (median age 86 years).
Objectives: Exploring steps after 28 days of measurement. Finding similarities in the data by running cluster analysis and formulating linear regressions models to predict steps through time.
Methods: Two types of motion trackers (Fitbit Alta HR and Garmin vívofit 3) were used to measure patients' (N=24) steps after hip fracture in two study groups. Cluster analysis detected three clusters of progress in the number of steps, which were tested for group differences with ANOVA. Regression analysis tested models for individual patients.
Results: Three-cluster solutions showed significant differences for the average amount of steps after 5, 14, 21 and 28 days. Regression models could predict 71 % of the individual patients' progress in study group 2.
Conclusion: The long-term study will provide more data in the future to examine the three-cluster solution and to find out in what stage of rehabilitation the measurement of the steps could be used to predict individual rehabilitation.
This paper introduces a custom framework for benchmarking software implementations from the National Institute of Standards and Technology (NIST) Lightweight Cryptography (LWC) project on embedded devices. We present the design and core functions of the framework and apply it to various NIST LWC authenticated encryption with associated data (AEAD) ciphers. Altogether, we evaluate the speed of 213 submitted algorithm variants on four different microcontroller units (MCUs), including 32-bit ARM and 8-bit AVR architectures. To allow a more meaningful comparison, we also conduct code size tests on all four boards and RAM utilization tests on one test platform.
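The framework itself targets MCUs, but the shape of such a speed test can be sketched on a host machine, here timing a stand-in AEAD from the Python cryptography package (illustrative only; the NIST LWC candidates ship their own reference C implementations):

```python
import time
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

key = ChaCha20Poly1305.generate_key()
aead = ChaCha20Poly1305(key)
nonce, ad, msg = b"\x00" * 12, b"header", b"x" * 64  # small, LWC-style inputs

runs = 10_000
start = time.perf_counter()
for _ in range(runs):
    aead.encrypt(nonce, msg, ad)  # AEAD encryption of msg with header ad
elapsed = time.perf_counter() - start
print(f"{runs} encryptions of {len(msg)} B: {elapsed * 1e6 / runs:.1f} us each")
```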
This contribution challenges the thesis that there are (or soon will be) autonomous machines, and that an ethics for machines is therefore needed to which these machines would have to be subjected. An ethics for autonomous machines would only be conceivable if these machines gave themselves that ethics and followed it voluntarily; this is precisely what autonomy means. Any set of rules programmed into machines, or otherwise imposed on them in a binding way, cannot be understood as machine ethics. The main argument of this contribution, however, is that humans are willing to ascribe this property to supposedly autonomous machines, thereby blurring the difference between humans and machines in acting, and thus provoking the very talk of a machine ethics in the first place. Finally, a moral norm is formulated that does not imply any machine ethics but represents a moral norm for humans. This would counteract the anthropomorphization of machines and ensure that the actions of an autonomous artificial agent are always attributed to its designers and manufacturers.
Cybersecurity in health
(2019)
Purpose
Cybersecurity in healthcare has become an urgent matter in recent years due to various malicious attacks on hospitals and other parts of the healthcare infrastructure. The purpose of this paper is to provide an outline of how core values of the health systems, such as the principles of biomedical ethics, are in a supportive or conflicting relation to cybersecurity.
Design/methodology/approach
This paper claims that it is possible to map the desiderata relevant to cybersecurity onto the four principles of medical ethics, i.e. beneficence, non-maleficence, autonomy and justice, and explore value conflicts in that way.
Findings
With respect to the question of how these principles should be balanced, there are reasons to think that the priority of autonomy relative to beneficence and non-maleficence in contemporary medical ethics could be extended to value conflicts in health-related cybersecurity.
Research limitations/implications
However, the tension between autonomy and justice, which relates to the desideratum of usability of information and communication technology systems, cannot be ignored even if one assumes that respect for autonomy should take priority over other moral concerns.
Originality/value
In terms of value conflicts, most discussions in healthcare deal with balancing efficiency and privacy, given the sensitive nature of health information. In this paper, the authors provide a broader and more detailed outline.
Two concepts describe the autonomous deployment of IT by business entities: Shadow IT and Business-managed IT. Shadow IT is deployed covertly, that is, software, hardware, or IT services created/procured or managed by business entities without alignment with the IT organization. In contrast, Business-managed IT describes the overt deployment of IT, that is, in alignment with the IT organization or in a split responsibility model. The purpose of this paper is to extend the conceptual understanding of Shadow IT and Business-managed IT by comparing the perceptions of 29 CIOs and senior IT managers with the results of a systematic literature review. In doing so, this paper presents a structured and comprehensive view of causing factors, outcomes, and governance of Shadow IT and Business-managed IT in practice. A comparison of academic literature and practitioner perceptions reveals the limitations and gaps of current research and highlights avenues for future research. The authors find three category-spanning themes occurring as causing factors, outcomes, and, as part of governance measures, factors to improve the IT organization: (1) (poor) business-IT alignment, (2) (lack of) agility, and (3) (lack of) policies. This study is innovative with its comprehensive qualitative interview data, which the authors compare to the existing literature. Therefore, the paper brings together theoretical and practical insights into Shadow IT and Business-managed IT, which should aid practitioners and scholars in decision making and future research.
The main issues in many image processing applications are object recognition and the detection of objects, which answer the questions of whether an object is present and, if so, where it is located. Popular object detection algorithms like YOLO use a regression formulation for the whole problem, especially for the bounding box parameters. In production industry, the setting is usually different: one usually knows the object type and rather wants to know with high precision where the object is. We study a prototype application in this area in which we identify the rotation of an object in a plane. To solve this problem, we use a regression approach with a CNN architecture as a function approximator. We compare our results to standard image processing algorithms that do not use neural networks, and present quantitative results on the accuracy. CNNs seem at least competitive to classical image processing.
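A minimal Keras sketch of such a regression head; predicting (sin, cos) of the angle rather than the raw angle sidesteps the 0°/360° wrap-around (our assumption for illustration, not necessarily the paper's encoding):

```python
import tensorflow as tf

# Small CNN regressing the in-plane rotation of a 64x64 grayscale image.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(2, activation="tanh"),  # (sin, cos) of the angle
])
model.compile(optimizer="adam", loss="mse")  # plain regression loss
# At inference, angle = atan2(sin_pred, cos_pred) recovers the rotation.
```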
The success of artificial intelligence in medicine is based on the need for large amounts of high-quality training data. Sharing of medical image data, however, is often restricted by laws such as doctor-patient confidentiality. Although there are publicly available medical datasets, their quality and quantity are often low. Moreover, datasets are often imbalanced, only represent a fraction of the images generated in hospitals or clinics, and can thus usually only be used as training data for specific problems. The introduction of generative adversarial networks (GANs) provides a means to generate artificial images by training two convolutional networks. This paper proposes a method which uses GANs trained on medical images in order to generate a large number of artificial images that could be used to train other artificial intelligence algorithms. This work is a first step towards alleviating data privacy concerns and being able to publicly share data that still contains a substantial amount of the information in the original private data. The method has been evaluated on several public datasets, with quantitative and qualitative tests showing promising results.
Currently, it is common practice to use three-dimensional (3D) printers not only for rapid prototyping in industry, but also in the medical area to create medical applications for training inexperienced surgeons. In a clinical training simulator for minimally invasive bone drilling to fix hand fractures with Kirschner wires (K-wires), a 3D-printed hand phantom must be not only geometrically but also haptically correct. Due to the limited view during an operation, surgeons need to perfectly localize underlying risk structures only by feeling specific bony protrusions of the human hand.
Document recommendation systems for locating relevant literature have mostly relied on methods developed a decade ago. This is largely due to the lack of a large offline gold-standard benchmark of relevant documents that cover a variety of research fields such that newly developed literature search techniques can be compared, improved and translated into practice. To overcome this bottleneck, we have established the RElevant LIterature SearcH consortium consisting of more than 1500 scientists from 84 countries, who have collectively annotated the relevance of over 180 000 PubMed-listed articles with regard to their respective seed (input) article/s. The majority of annotations were contributed by highly experienced, original authors of the seed articles. The collected data cover 76% of all unique PubMed Medical Subject Headings descriptors. No systematic biases were observed across different experience levels, research fields or time spent on annotations. More importantly, annotations of the same document pairs contributed by different scientists were highly concordant. We further show that the three representative baseline methods used to generate recommended articles for evaluation (Okapi Best Matching 25, Term Frequency–Inverse Document Frequency and PubMed Related Articles) had similar overall performances. Additionally, we found that these methods each tend to produce distinct collections of recommended articles, suggesting that a hybrid method may be required to completely capture all relevant articles. The established database server located at https://relishdb.ict.griffith.edu.au is freely available for the downloading of annotation data and the blind testing of new methods. We expect that this benchmark will be useful for stimulating the development of new powerful techniques for title and title/abstract-based search engines for relevant articles in biomedical research.
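As a reminder of how the TF-IDF baseline ranks related articles, a compact scikit-learn sketch over toy abstracts (illustrative only; the benchmark operates on PubMed-scale data):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "deep learning for biomedical image segmentation",  # seed article
    "convolutional networks segment medical images",
    "crop yields under drought stress in maize",
]
tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)
scores = cosine_similarity(tfidf[0], tfidf).ravel()  # seed vs. all documents
print(scores.round(2))  # the medical abstract outranks the agronomy one
```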
Aims:
The delineation of the outer margins of early Barrett's cancer can be challenging even for experienced endoscopists. Artificial intelligence (AI) could assist endoscopists faced with this task. To date, there is very limited experience in this domain. In this study, we measure the overlap (Dice coefficient, D) between highly experienced Barrett endoscopists and an AI system in the delineation of cancer margins (segmentation task).
Methods:
An AI system with a deep convolutional neural network (CNN) was trained and tested on high-definition endoscopic images of early Barrett's cancer (n = 33) and normal Barrett's mucosa (n = 41). The reference standard for the segmentation task was the manual delineation of tumor margins by three highly experienced Barrett endoscopists. Training of the AI system included patch generation, patch augmentation and adjustment of the CNN weights. Segmentation results were then obtained by patch classification and thresholding of the class probabilities, and were evaluated using the Dice coefficient (D).
Results:
The Dice coefficient (D), which can range from 0 (no overlap) to 1 (complete overlap), was computed only for images correctly classified as cancerous by the AI system. At a threshold of t = 0.5, a mean value of D = 0.72 was computed.
Conclusions:
AI with a CNN performed reasonably well in segmenting the tumor region in Barrett's cancer, at least when compared with expert Barrett endoscopists. AI holds a lot of promise as a tool for better visualization of tumor margins but may need further improvement, especially in real-time settings.
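For readers unfamiliar with the overlap measure used here: the Dice coefficient has the standard definition D = 2|A ∩ B| / (|A| + |B|) for two binary masks A and B. A minimal sketch of computing it from a thresholded probability map (toy values, not the study's data):

```python
# Dice coefficient between a predicted and a reference binary mask,
# D = 2|A ∩ B| / (|A| + |B|). Illustrative sketch.
import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    pred, ref = pred.astype(bool), ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    total = pred.sum() + ref.sum()
    return 2.0 * intersection / total if total > 0 else 1.0

# Example: thresholding class probabilities at t = 0.5 as in the study.
probs = np.array([[0.9, 0.6], [0.4, 0.1]])
mask = probs >= 0.5
print(dice(mask, np.array([[1, 1], [0, 0]], dtype=bool)))  # -> 1.0
```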
The growing number of publications on the application of artificial intelligence (AI) in medicine underlines the enormous importance and potential of this emerging field of research.
In gastrointestinal endoscopy, AI has been applied to all segments of the gastrointestinal tract, most importantly in the detection and characterization of colorectal polyps. However, AI research has also been published on the stomach and esophagus, for both neoplastic and non-neoplastic disorders.
The various technical as well as medical aspects of AI, however, remain confusing, especially for non-expert physicians.
This physician-engineer co-authored review explains the basic technical aspects of AI and provides a comprehensive overview of recent publications on AI in gastrointestinal endoscopy. Finally, it offers basic guidance for understanding publications on AI in gastrointestinal endoscopy.
IT Backsourcing
(2019)
With the growing importance of IT as a competitive advantage, companies aim to increase their digital transformation activities. Consequently, companies are also revisiting their existing IT sourcing arrangements. In the article at hand, the authors explore the concept of IT backsourcing by presenting the results from a quantitative online survey of global IT practitioners. The authors confirm that backsourcing is frequently applied in practice, with key reasons being dissatisfaction with service or relationship quality and higher-than-expected costs. Further, the authors identify IT services with an increased likelihood of being backsourced, e.g., application development or data center operations, and discuss the effect of a CIO change on the backsourcing decision. In addition, the authors show that there are differences between the management and operational levels in how the antecedents and results of backsourcing decisions are perceived. The authors conclude with practical implications for IT managers based on their findings.
The sustainable development of processes is currently an important topic in industry and science. Various interest groups and other factors like resource shortages or a lack of skilled workers drive this trend. In the area of Production and Operations Management, the integration of sustainability aspects is also becoming more and more important; however, mainly ecological aspects have been considered. Thus, this contribution expands previous research by integrating social aspects. For this, we limit our research to Master Production Scheduling as part of operational production planning. In the context of sustainability, the Global Reporting Initiative (GRI) Standard has established itself for companies and other organisations to report on their sustainability activities. It is the first and most widely adopted global standard for sustainability reporting. The aim of this standard is to measure sustainability performance so that sustainability activities can be oriented to specific performance indicators. Accordingly, the GRI standard enables concrete initiatives to improve social, environmental and economic conditions for everyone. The modular standard comprises 36 standard modules. For this paper, the 19 social modules were analysed and relevant indicators that could be influenced by Master Production Scheduling were identified. One of the important indicators from the GRI standard for our research question is the use of employees, measured in terms of hiring and employee turnover. Long-term employee commitment, and thus low turnover, will become a competitive advantage in the future, a trend reinforced by demographic change and the lack of skilled workers. In order to integrate these aspects based on the GRI standard, the classic formulation of Master Production Scheduling is extended: a control of employee utilization within concrete utilization intervals as well as utilization-dependent processing times due to different exhaustion effects are integrated. Additionally, in order to take into account the requirements of flexible production and to model the use of employees, aspects of personnel requirements planning, which enable the build-up and reduction of capacity, are considered as well. As a result, the social and economic consequences of related production scenarios are presented. In this context, a conflict of objectives arises, especially under macroeconomic fluctuations: on the one hand, shortages and inventories should be avoided from an economic point of view; on the other hand, a low and constant work intensity as well as a constant number of staff should be preferred for long-term employee commitment. This makes clear the ability of the model to examine such questions, which will become more and more important for industrial enterprises in the future. Furthermore, it can be seen that it is possible to improve social aspects through adapted production planning without neglecting economic targets.
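As a toy illustration of how an employee-utilization corridor can be added to a master production scheduling model, consider the following linear program in PuLP; the data, costs, and bounds are hypothetical, and this is a sketch rather than the authors' actual model:

```python
# Toy master production scheduling sketch with an employee-utilization
# corridor (illustrative only; not the model from the paper). Uses PuLP.
import pulp

periods = range(4)
demand = [80, 120, 100, 90]          # units per period (hypothetical)
hours_per_unit = 0.5                 # processing time (hypothetical)
capacity = 70.0                      # available employee hours per period
u_min, u_max = 0.6, 0.9              # allowed utilization interval

m = pulp.LpProblem("mps_social", pulp.LpMinimize)
prod = pulp.LpVariable.dicts("prod", periods, lowBound=0)
inv = pulp.LpVariable.dicts("inv", periods, lowBound=0)

m += pulp.lpSum(2.0 * inv[t] for t in periods)   # minimize holding costs

for t in periods:
    prev = inv[t - 1] if t > 0 else 0            # inventory balance
    m += prev + prod[t] - demand[t] == inv[t]
    m += prod[t] * hours_per_unit >= u_min * capacity  # keep utilization
    m += prod[t] * hours_per_unit <= u_max * capacity  # within the corridor

m.solve(pulp.PULP_CBC_CMD(msg=False))
print([pulp.value(prod[t]) for t in periods])
```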
As part of the Krones Group's Asian-Pacific Shared Services (APSS) project, the Thailand site is to be established as a shared service center (SSC) by 2020, taking over central back-office tasks for six new service and sales subsidiaries in the Asia-Pacific (APAC) region. To implement the SSC concept technically, independent ERP systems are to be set up for each of the newly founded subsidiaries. Until now, the scope of such ERP rollout projects for subsidiaries of Krones AG was not based on any standard. Therefore, by creating a software-supported template for defining layout and report requirements, the implementation of international ERP systems was not only to be simplified, but a standard catalogue of questions for handling such projects within the Krones Group was also to be introduced. This was expected to accelerate the implementation of the remaining ERP systems for the APSS project.
In Infineon's precious metals controlling, errors can only be analyzed with great effort, and contradictory calculation instructions arise. All four perspectives of precious metals controlling are improved, and SAP is defined as the single data source. Further improvements are achieved by converting a monthly exchange-rate survey into an annual one, by using the market value as the inventory value, and by determining a standard value for an annual recycling rate.
This article reviews the state of the art in research on sustainable extensions of hierarchical production planning. Sustainability is currently of considerable importance due to various interest groups. Hierarchical operational production planning and control is the state of the art in research as well as in industrial practice for planning stations and their aggregation into production systems; thus, it might be a highly relevant lever for improving sustainability. The literature mainly considers the scheduling level as well as the ecological dimension, so current research is limited to selected partial planning problems and is incomplete with regard to the sustainability aspects that emerge.
This article illustrates the problem of deficient working conditions, such as high work intensity, and underlines the potential of production planning and control to improve them. A literature review shows, however, that previous approaches have predominantly considered social aspects within a short planning horizon and that there is a need for a long-term perspective. To address this research gap, a linear optimization model for Master Production Scheduling is presented, which has been extended by basic aspects of personnel requirements planning and a control of personnel work intensity. The results show that improvements in working conditions can be achieved through adapted production planning without an automatic increase in costs. However, challenges in quantifying the corresponding social correlations are identified as well.
Kostensimulation
(2019)
In recent years, the workshop "Bildverarbeitung für die Medizin" has established itself through a series of successful events. The goal for 2019 is once again to present current research results and to deepen the dialogue between scientists, industry and users. The contributions in this volume, some of them in English, cover all areas of medical image processing, in particular imaging and acquisition, machine learning, image segmentation and image analysis, visualization and animation, time series analysis, computer-aided diagnosis, biomechanical modeling, validation and quality assurance, image processing in telemedicine, and much more.
The number of patients with Barrett's esophagus (BE) has increased over the last decades. Considering the severity of the disease and its possible evolution to adenocarcinoma, an early diagnosis of BE may provide a high probability of cancer remission. However, limitations of traditional methods for the detection and management of BE demand alternative solutions. Computer-aided tools have recently been used to assist with this problem, but the challenge still persists. To address it, we introduce the infinity Restricted Boltzmann Machines (iRBMs) for the task of automatic identification of Barrett's esophagus from endoscopic images of the lower esophagus. Moreover, since the iRBM requires a proper selection of its meta-parameters, we also present a discriminative iRBM fine-tuning using six meta-heuristic optimization techniques. We show that iRBMs are suitable for this context, since they provide competitive results, and that the meta-heuristic techniques are appropriate for such a task.
Background: Currently, it is common practice to use three-dimensional (3D) printers not only for rapid prototyping in industry but also in medicine, for example to create applications for training inexperienced surgeons. In a clinical training simulator for minimally invasive bone drilling to fix hand fractures with Kirschner wires (K-wires), a 3D-printed hand phantom must be not only geometrically but also haptically correct. Due to the limited view during an operation, surgeons need to localize underlying risk structures precisely by palpating specific bony protrusions of the human hand.
Methods: The goal of this experiment is to imitate human soft tissue with its haptics and elasticity for a realistic hand phantom fabrication, using only a dual-material 3D printer and support-material-filled metamaterial between skin and bone. We present our workflow to generate lattice structures between hard bone and soft skin with iterative cube edge (CE) or cube face (CF) unit cells. Cuboid- and finger-shaped sample prints, with and without an inner hard bone and with different lattice thicknesses, are constructed and 3D printed.
Results: The most elastic available rubber-like material is too firm to imitate soft tissue. By reducing the amount of rubber in the inner volume through support material (SUP), objects become significantly softer. Without metamaterial, the SUP can shift through the volume after disintegration, and thus the body loses its original shape. Although the CE design increases elasticity, it cannot preserve the original form. In contrast to CE, the CF design not only increases elasticity but also guarantees local confinement of the SUP. Therefore, the body retains its shape and internal bones remain in their intended place. Various unit cell sizes, lattice thickenings and skin thicknesses regulate the ratio of rubber material to SUP. Test prints with a higher SUP and lower rubber material percentage appear softer, and vice versa. This was confirmed by an expert surgeon evaluation: subjects judged pure rubber-like material to be too firm, and samples filled only with SUP or with the CE lattice structure as not suitable for imitating tissue. 3D-printed finger samples in the CF design were rated as realistic compared to the haptics of human tissue, with a well palpable bone structure.
Conclusions: We developed a new dual-material 3D printing technique to imitate the soft tissue of the human hand with its haptic properties. Soft SUP is trapped within a lattice structure to soften the rubber-like 3D printing material, which makes it possible to produce a realistic replica of human hand soft tissue.
One common method to fix fractures of the human hand after an accident is osteosynthesis with Kirschner wires (K-wires) to stabilize the bone fragments. The insertion of K-wires is a delicate minimally invasive surgery, because surgeons operate almost without sight. Since realistic training methods are time-consuming, costly and insufficient, a virtual-reality (VR) based training system for the placement of K-wires was developed. As part of this, the current work deals with real-time bone drilling simulation using a haptic force-feedback device.
To simulate the drilling, we introduce a virtual-fixture-based force-feedback drilling approach. By decomposing the drilling task into individual phases, each phase can be handled separately to precisely control the drilling procedure. We describe the related finite state machine (FSM), the haptic feedback of each state, and how to avoid jerking of the haptic force feedback during state transitions.
The virtual fixture approach results in good haptic performance and stable drilling behavior. This was confirmed by 26 expert surgeons, who evaluated the virtual drilling on the simulator and rated it as very realistic. To make the system even more convincing, we determined real drilling feed rates through experimental pig bone drilling and transferred them to our system. Due to a constant simulation thread, we can guarantee a precise drilling motion.
Virtual-fixture-based force-feedback calculation is able to simulate force-feedback-assisted bone drilling with high quality and thus has great potential for the development of medical applications.
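To illustrate how decomposing the drilling task into phases can be organized, here is a minimal finite-state-machine sketch; the states, thresholds, and force gains are hypothetical and not taken from the simulator described above:

```python
# Minimal finite-state-machine sketch for a phased drilling simulation
# (hypothetical states and force values; not the simulator's actual FSM).
from enum import Enum, auto

class Phase(Enum):
    APPROACH = auto()      # tool moves freely toward the bone surface
    CONTACT = auto()       # tip touches bone, resistance builds up
    DRILLING = auto()      # tip penetrates, feed-rate-dependent force
    BREAKTHROUGH = auto()  # force drops as the far cortex is pierced

def next_phase(phase: Phase, depth: float, bone_thickness: float) -> Phase:
    if phase is Phase.APPROACH and depth >= 0.0:
        return Phase.CONTACT
    if phase is Phase.CONTACT and depth > 0.5:        # mm, hypothetical
        return Phase.DRILLING
    if phase is Phase.DRILLING and depth > bone_thickness:
        return Phase.BREAKTHROUGH
    return phase

def feedback_force(phase: Phase, feed_rate: float) -> float:
    """Per-phase force magnitude; blending at transitions avoids jerks."""
    gains = {Phase.APPROACH: 0.0, Phase.CONTACT: 2.0,
             Phase.DRILLING: 5.0, Phase.BREAKTHROUGH: 1.0}
    return gains[phase] * feed_rate
```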
IT-Backsourcing
(2019)
The importance of information systems (IS) for companies has grown rapidly over recent years. For example, companies from various industries are focusing their efforts on digitizing business models, adopting agile forms of collaboration, and leveraging the automation of routine tasks. This forces companies to review their IS sourcing strategies and to consider alternatives to current outsourcing contracts.
One potential option is to backsource IS services currently performed by external vendors. Based on empirical observations, previous scholars defined backsourcing as the repatriation of all assets, activities, and skills needed to perform IS services back in-house after they had previously been outsourced to one or multiple vendors. The distinctive characteristic of backsourcing is the change in ownership back to the parent organization. This distinguishes the term from similar terms such as backshoring, reshoring, or relocating, which focus on the change in the location of service delivery. At the time of the initial sourcing decision, the strategy to outsource an IS service was considered most promising. However, certain developments both inside the company and at a broader industry level may lead to a re-evaluation of the situation and thus to an adjustment of the original sourcing strategy.
Over the last years, there has been a strong increase in cloud-based services and a shift towards the deployment of standardized, almost "industrialized" applications as services. Previously, large companies leveraged their size by realizing economies of scale within their sourcing volumes. The rise of cloud computing as well as the standardization of service delivery decreased those benefits and led to a decline in large outsourcing contracts spanning a variety of services and to a growth of multi-sourcing of individual IS services from the respective "best-in-breed" vendors for shorter time periods. Besides sourcing the best vendor for each service, companies that implement multi-sourcing strategies additionally aim for cost benefits due to increased competition between vendors and for an increase in agility and adaptability.
At the same time, the move towards multi-sourcing strategies with shorter contract durations leads to an increase in so-called "second-generation sourcing decisions", namely whether to continue outsourcing a respective service or to bring the service back in-house. Given these developments within the IS environment, this research project aims at understanding why IS services are backsourced and which factors influence a company's decision to backsource an IS service.
Practitioners have developed numerous frameworks for scaling agile and lean methods. These frameworks are increasingly being used in practice in large-scale system development activities and are therefore in the scope of project portfolio management. The paper examines the conformity of selected scaling agile frameworks to the objectives of project portfolio management through qualitative analysis. Four prominent frameworks are investigated: Scrum of Scrums, Large-Scale Scrum, Disciplined Agile, and the Scaled Agile Framework. The study shows significant differences between the frameworks and their views on project portfolio management, with the Scaled Agile Framework offering the most detailed portfolio-management-relevant processes and roles. In contrast, Large-Scale Scrum does not see the need for portfolio management but promotes self-managing teams. Two of the four frameworks share the perceptual shift from projects to products, which has significant implications for practitioners because it challenges and redefines prevalent management roles and practices.
This paper is directed towards IT executives aiming to promote the adoption of cloud computing (CC) within their company. We conducted a longitudinal case study on the evolving CC strategy and its implementation at the multinational company Continental, based on a previous case study by Loebbecke et al. (2012). We narrate Continental's pathway towards CC adoption, which comprised the experimentation, professionalization, and utilization of CC, and discuss current and previous barriers encountered during the implementation. We derive five lessons learned that can serve as practical guidance for executives aiming to accelerate CC adoption within their own organization: (1) differentiate the CC strategy by delivery model; (2) drive proofs of concept to generate reusable blueprints; (3) pre-invest in the integration of IaaS and PaaS providers; (4) implement CC gradually, transforming applications during the transition to the cloud; and (5) disseminate knowledge within the organization to enable change.
Blockchain and GDPR
(2019)
Blockchain and the European General Data Protection Regulation (GDPR) are two topics that are currently highly discussed in academia and amongst professionals. Blockchain technology is claimed to revolutionize how business is conducted through its way of storing data and sharing it with others. The recently introduced GDPR has a huge impact on the processing of personal data because it brought major changes to privacy regulation. This might also affect the processing of personal data in Blockchain-based application scenarios. Based on a literature analysis, the paper at hand provides an overview of Blockchain technology, presents a decision model, and introduces two possible Blockchain application scenarios. It analyses relevant requirements of the GDPR regarding the processing of personal data and compares these with the fundamental principles of Blockchain technology. The paper concludes that processing personal data on the Blockchain violates the GDPR because it conflicts with fundamental specifications of this regulation. This finding reveals the need for further research to propose concepts and frameworks for GDPR-compliant processing of personal data using Blockchain technology.
This paper develops a categorization of cloud security risks, elaborates how they impact information security, and discusses potential security benefits from cloud sourcing. This review integrates the literature on information systems and computer science to summarize managerial and technological mitigation measures that enhance cloud security. The analysis uncovers gaps regarding the empirical investigation of security considerations in the corporate decision-making process. Specifically, the micro level of how security comes into play in the decision-making process and whether mitigation measures proposed by scholars are practically feasible requires further investigation. Furthermore, the macro level, how stakeholders perceive cloud sourcing, is not yet well understood.
Business-managed IT and Shadow IT describe the autonomous deployment/management of IT instances by business units. For the former, this happens in alignment with the IT organization of the enterprise, for the latter, without alignment. We analyze why and how Business-managed IT and Shadow IT transform knowledge sharing with two case studies drawing on the theoretical lens of the knowledge-based view. Several motivators lead to the autonomous implementation of knowledge management systems (KMSs), for example, shortcomings of existing systems. The implemented KMSs have multiple benefits for knowledge sharing, such as a reduction of knowledge sharing barriers. However, we notice that the Shadow IT KMS leads to challenges for cross-unit knowledge sharing due to the covert nature of Shadow IT. Based on the findings of our case studies, we develop a mid-range theory to explain the transformation of knowledge sharing in enterprises supported by Business-managed IT and Shadow IT.
A quantitative analysis of culture-induced differences in pivotal IT outsourcing contract features
(2019)
More than 30 years after its first implementation, IT outsourcing (ITO) is unanimously considered a critical component of corporate strategy for private and public institutions alike. While implementations of ITO around the world share some common characteristics, such as typical reasons for outsourcing, key success factors, or dimensions along which they can be classified, extant research also points to regional differences. However, research on this topic, specifically regarding pivotal contract features like contract value, contract length, or pricing methods, is still in its infancy, and quantitative analyses on the subject are particularly scarce. We address this research gap by analyzing data on 14,917 ITO contracts closed between 2007 and 2017 through the lens of cultural regions and three statistical methods. The contribution of our paper is threefold. First, our descriptive analysis points to globally decreasing contract lengths and contract values, confirming previous studies and practice reports. Second, an ANOVA with independent post-hoc testing provides quantitative support for the degree of dissimilarity among individual regions in pivotal ITO contract features. Finally, our quantitative replication of a previous study identifies culture-induced regional differences between the USA and Japan regarding the effect of influence factors on ITO contract features.
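The second analysis step, a one-way ANOVA across cultural regions, can be sketched as follows; the numbers are synthetic toy data, not the study's contract data:

```python
# One-way ANOVA sketch: do mean contract lengths differ across regions?
# Synthetic toy data; the study used 14,917 real ITO contracts.
from scipy import stats

contract_length_months = {
    "North America": [36, 48, 60, 24, 36],
    "Western Europe": [60, 72, 48, 60, 84],
    "Japan": [84, 96, 72, 60, 84],
}

f_stat, p_value = stats.f_oneway(*contract_length_months.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A small p-value would motivate post-hoc tests on individual region pairs.
```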
Adaptive Moment Estimation (Adam) is a very popular training algorithm for deep neural networks, implemented in many machine learning frameworks. To the best of the authors' knowledge, no complete convergence analysis exists for Adam. The contribution of this paper is a method for the local convergence analysis in batch mode for a deterministic fixed training set, which gives necessary conditions for the hyperparameters of the Adam algorithm. Due to the local nature of the arguments, the objective function can be non-convex but must be at least twice continuously differentiable.
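For reference, the Adam update analyzed here follows the standard form given by Kingma and Ba: with gradient $g_t$, moment estimates $m_t, v_t$, step size $\alpha$, and hyperparameters $\beta_1, \beta_2, \varepsilon$,

```latex
m_t = \beta_1 m_{t-1} + (1-\beta_1)\, g_t, \qquad
v_t = \beta_2 v_{t-1} + (1-\beta_2)\, g_t^2,
\qquad
\hat m_t = \frac{m_t}{1-\beta_1^{\,t}}, \qquad
\hat v_t = \frac{v_t}{1-\beta_2^{\,t}}, \qquad
\theta_t = \theta_{t-1} - \alpha\, \frac{\hat m_t}{\sqrt{\hat v_t} + \varepsilon}.
```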
One of the most popular training algorithms for deep neural networks is Adaptive Moment Estimation (Adam), introduced by Kingma and Ba. Despite its success in many applications, there is no satisfactory convergence analysis: only local convergence can be shown for batch mode under some restrictions on the hyperparameters, and counterexamples exist for incremental mode. Recent results show that for simple quadratic objective functions limit cycles of period 2 exist in batch mode, but only for atypical hyperparameters, and only for the algorithm without bias correction. We extend the convergence analysis to all choices of the hyperparameters for quadratic functions. This finally answers the question of convergence for Adam in batch mode in the negative. We analyze the stability of these limit cycles and relate our analysis to other results where approximate convergence was shown, but under the additional assumption of bounded gradients, which does not apply to quadratic functions. The investigation relies heavily on the use of computer algebra due to the complexity of the equations.
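A minimal sketch of the batch-mode setting studied here: Adam without bias correction iterated on the scalar quadratic f(x) = x²/2, whose trajectory can be printed and inspected for convergence or oscillation. The hyperparameters below are arbitrary choices, not those from the paper:

```python
# Adam without bias correction on the quadratic f(x) = x^2 / 2 (batch mode).
# Illustrative sketch for inspecting trajectories; not the paper's analysis.
alpha, beta1, beta2, eps = 0.1, 0.9, 0.999, 1e-8
x, m, v = 1.0, 0.0, 0.0

for t in range(1, 101):
    g = x                          # gradient of f(x) = x^2 / 2
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g * g
    x = x - alpha * m / (v ** 0.5 + eps)
    if t % 20 == 0:
        print(f"t={t:3d}  x={x:+.6f}")
```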
Shadow IT and Business-managed IT describe the autonomous deployment/procurement or management of Information Technology (IT) instances, i.e., software, hardware, or IT services, by business entities. For Shadow IT, this happens covertly, i.e., without alignment with the IT organization; for Business-managed IT this happens overtly, i.e., in alignment with the IT organization or in a split responsibility model. We conduct a systematic literature review and structure the identified research themes in a framework of causing factors, outcomes, and governance. As causing factors, we identify enablers, motivators, and missing barriers. Outcomes can be benefits as well as risks/shortcomings of Shadow IT and Business-managed IT. Concerning governance, we distinguish two subcategories: general governance for Shadow IT and Business-managed IT and instance governance for overt Business-managed IT. Thus, a specific set of governance approaches exists for Business-managed IT that cannot be applied to Shadow IT due to its covert nature. Hence, we extend the existing conceptual understanding and allocate research themes to Shadow IT, Business-managed IT, or both concepts and particularly distinguish the governance of the two concepts. Besides, we find that governance themes have been the primary research focus since 2016, whereas older publications (until 2015) focused on causing factors.
Purpose
This paper aims to investigate the main tasks, necessary skills, and the implementation of the offshore coordinator’s role to facilitate knowledge transfer in information systems (IS) offshoring.
Design/methodology/approach
This empirical exploratory study uses the classical Delphi method that includes one qualitative and two quantitative rounds to collect data on IS experts’ perceptions to seek a consensus among them.
Findings
The participants agreed, with strong consensus, for a set of 16 tasks and 15 skills. The tasks focused primarily on relationship management and facilitating knowledge transfer on different levels. The set of skills consists of approximately 25 per cent “hard” skills, e.g. professional language skills and project management skills, and approximately 75 per cent “soft” skills, e.g. interpersonal and communication skills and the ability to deal with conflict. Two factors mainly influence implementing the offshore coordinator role: project size and the number of projects to be supported simultaneously.
Practical implications
The findings provide indications of how to define and fulfill this crucial role in practice to facilitate the knowledge transfer process in a positive way.
Originality/value
Similarities in previous research findings are aggregated to examine the intermediate role in detail from a consolidated perspective. This results in the first comprehensive set of critical tasks and skills assigned to the competency dimensions of the universal competency framework, demonstrating which and how many competency dimensions are critical.
Incomplete conceptualization of the information technology outsourcing (ITO) literature represents a challenge for navigating extant research and engaging in purposeful academic discourse. We extend the analysis of empirical findings on determinants of ITO decisions, outcomes, and governance. We identify increasing levels of research maturity, analyze the effects of 38 new independent variables, highlight contradictory findings, and observe increasing interest in emerging topics such as innovation through ITO and multisourcing.
To prepare their IT landscape for future business challenges, companies are changing their IT sourcing arrangements by using selective sourcing approaches as well as multi-sourcing with more but smaller sourcing contracts. Companies therefore have to reconsider and re-evaluate their IT sourcing setup more frequently. Collecting data from 251 global experts, we empirically tested the effect of service quality, relationship quality, and switching costs on IT sourcing decisions using partial least squares (PLS) analysis. Drawing on previously conducted expert interviews, our model extends previous studies and introduces a decision maker's sourcing preferences as a not yet examined moderator of IT sourcing decisions. This allows us to investigate the influence of the decision maker's beliefs on the decision process. We were able to confirm the negative effect of switching costs on a decision in favor of backsourcing; however, we could not find significant support for the remaining hypotheses. We further discuss potential reasons for our findings and suggest future research opportunities based on our contribution.
Computer-aided diagnosis using deep learning in the evaluation of early oesophageal adenocarcinoma
(2019)
Computer-aided diagnosis using deep learning (CAD-DL) may be an instrument to improve the endoscopic assessment of Barrett's oesophagus (BE) and early oesophageal adenocarcinoma (EAC). Based on still images from two databases, the diagnosis of EAC by CAD-DL reached sensitivities/specificities of 97%/88% (Augsburg data) and 92%/100% (Medical Image Computing and Computer-Assisted Intervention [MICCAI] data) for white light (WL) images, and 94%/80% for narrow-band imaging (NBI) (Augsburg data). Tumour margins delineated by experts in the images were detected satisfactorily with a Dice coefficient (D) of 0.72. This could be a first step towards CAD-DL for BE assessment. If developed further, it could become a useful adjunctive tool for patient management.
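The sensitivity and specificity figures above follow their standard definitions (true-positive rate and true-negative rate); a minimal sketch with toy labels:

```python
# Sensitivity (recall on positives) and specificity (recall on negatives)
# from binary labels; illustrative sketch with toy data.
def sens_spec(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]
sensitivity, specificity = sens_spec(y_true, y_pred)
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```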