Digitalisierung
Investigating temporal variability of functional connectivity is an emerging field in connectomics. Dynamic functional connectivity, obtained by applying sliding-window techniques to resting-state fMRI (rs-fMRI) time courses, emerged from this topic. We introduce frequency-resolved dynamic functional connectivity (frdFC) by means of multivariate empirical mode decomposition (MEMD), followed by filter-bank investigations. In general, we find that MEMD is capable of generating time courses to perform frdFC, and we discover that the structure of connectivity-states is robust over frequency scales and even becomes more evident with decreasing frequency. This scale-stability varies with the number of clusters extracted when applying k-means. We find a scale-stability drop-off from k = 4 to k = 5 extracted connectivity-states, which is corroborated by null-models, simulations, theoretical considerations, filter-banks, and scale-adjusted windows. Our filter-bank studies show that filter design is more delicate in the rs-fMRI case than in the simulated case. Besides offering a baseline for further frdFC research, we suggest and demonstrate the use of scale-stability as a possible quality criterion for connectivity-state and model selection. We present first evidence that connectivity-states are both a multivariate and a multiscale phenomenon. A data repository of our frequency-resolved time series is provided.
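The sliding-window core of such an analysis is compact enough to sketch. Below is a minimal Python illustration (toy data; the MEMD decomposition and the paper's actual pipeline are not reproduced, and all names are placeholders) that computes windowed correlation matrices and clusters them into connectivity-states with k-means:

```python
# Minimal sliding-window dynamic functional connectivity sketch (illustrative,
# not the authors' pipeline): windowed correlation matrices are clustered with
# k-means into connectivity-states. The MEMD preprocessing step is omitted.
import numpy as np
from sklearn.cluster import KMeans

def sliding_window_fc(ts, width=30, step=1):
    """ts: (n_timepoints, n_regions) array -> array of vectorized FC matrices."""
    n_t, n_r = ts.shape
    iu = np.triu_indices(n_r, k=1)          # upper triangle, no diagonal
    windows = []
    for start in range(0, n_t - width + 1, step):
        corr = np.corrcoef(ts[start:start + width].T)
        windows.append(corr[iu])            # vectorize each FC matrix
    return np.asarray(windows)

rng = np.random.default_rng(0)
ts = rng.standard_normal((300, 20))         # toy data in place of rs-fMRI
fc = sliding_window_fc(ts)
states = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(fc)
print(np.bincount(states))                  # occupancy of the k=4 states
```

Repeating the clustering per MEMD frequency scale and comparing the resulting state centroids across scales is, in essence, what a scale-stability check amounts to.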
Increasing user participation or changing behavior are key goals when applying gamification. Existing studies in domains such as education, health, and enterprise show that gamification can have a positive impact on meeting these goals. However, there is still a lack of detailed insights into how certain game design elements affect user behavior and motivation. To gain further insight, this paper presents a user study in the field with 20,000 participants of a mobile e-commerce application over a one-month period to analyze the impact of gamification in the e-commerce domain and to compare the effectiveness of tangible versus intangible rewards. Results show that gamification has a positive impact in the e-commerce domain. The study also reveals that tangible rewards increase user activity substantially more than intangible rewards. We further show how tangible rewards affect certain user types and provide a first discussion of the lastingness of these rewards.
We present jHound, a tool for profiling large collections of JSON data, and apply it to thousands of data sets holding open government data. jHound reports key characteristics of JSON documents, such as their nesting depth. As we show, jHound can help detect structural outliers, and most importantly, badly encoded documents: jHound can pinpoint certain cases of documents that use string-typed values where other native JSON datatypes would have been a better match. Moreover, we can detect certain cases of maladaptively structured JSON documents, which obviously do not comply with good data modeling practices. By interactively exploring particular example documents, we hope to inspire discussions in the community about what makes a good JSON encoding.
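Two of the reported checks can be illustrated in a few lines (a sketch of the idea only, not jHound's implementation; the document and field names are invented):

```python
# Sketch of two jHound-style checks (illustrative, not jHound's actual code):
# maximum nesting depth, and string values that would parse as native numbers.
import json

def nesting_depth(node):
    if isinstance(node, dict):
        return 1 + max((nesting_depth(v) for v in node.values()), default=0)
    if isinstance(node, list):
        return 1 + max((nesting_depth(v) for v in node), default=0)
    return 0

def stringified_numbers(node, path="$"):
    """Yield paths whose string value would fit a native JSON number."""
    if isinstance(node, dict):
        for k, v in node.items():
            yield from stringified_numbers(v, f"{path}.{k}")
    elif isinstance(node, list):
        for i, v in enumerate(node):
            yield from stringified_numbers(v, f"{path}[{i}]")
    elif isinstance(node, str):
        try:
            float(node)
            yield path
        except ValueError:
            pass

doc = json.loads('{"station": {"id": "42", "lat": "48.99", "name": "Regensburg"}}')
print(nesting_depth(doc), list(stringified_numbers(doc)))
# -> 2 ['$.station.id', '$.station.lat']
```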
Newly introduced functions in the automotive industry, such as autonomous driving, require the use of high-performance multi-core processors as well as complex (POSIX-compatible) operating systems. In addition, the price pressure specific to this industry is leading to a consolidation of electronic control units. At the same time, automotive use demands high functional safety (ASIL levels), which among other things presupposes robust real-time properties of the hardware and software employed. As a consequence, hypervisors are used to separate hard and soft real-time systems on the same hardware. This paper examines the latency impact of various software configurations on next-generation hardware by means of a presented test setup and its results.
This contribution presents the course concept with its JiTT units, classical lectures, exercises, and lab course. In particular, the structure of the teaching texts and of the accompanying questions is described. Subsequently, challenges in implementing the concept are discussed: the decision on the number of teaching texts, their creation by the author, and the realization of the overall concept on an online platform. Finally, the students' reactions and feedback on the new teaching concept are presented, based on an evaluation of the answers to the questions in the JiTT units as well as on student surveys using several questionnaires.
The use of compliant tensegrity structures in robotic applications offers several advantageous properties. In this work the dynamic behaviour of a planar tensegrity structure with multiple static equilibrium configurations is analysed with respect to its further use in a two-finger-gripper application. In this application, two equilibrium configurations of the structure correspond to the opened and closed states of the gripper. The transition between these equilibrium configurations, caused by a properly selected actuation method, depends essentially on the actuation parameters and on the system parameters. To study the behaviour of the dynamic system and possible actuation methods, the nonlinear equations of motion are derived and transient dynamic analyses are performed. The movement behaviour is analysed in relation to the prestress of the structure and the actuation parameters.
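In generic form, such equations of motion have the following structure (illustrative notation, not necessarily the paper's):

```latex
% Generic form of the nonlinear equations of motion for the prestressed
% structure (illustrative notation, not necessarily the paper's):
\[
  \mathbf{M}\,\ddot{\mathbf{q}} + \mathbf{D}\,\dot{\mathbf{q}}
  + \mathbf{f}_{\mathrm{int}}(\mathbf{q}) = \mathbf{f}_{\mathrm{act}}(t),
  \qquad
  \mathbf{f}_{\mathrm{int}}(\mathbf{q}^{\ast}) = \mathbf{0} .
\]
% q collects the nodal coordinates; each static equilibrium q* (the opened or
% closed gripper state) is a root of the internal force term, which contains
% the member prestress. The actuation f_act(t) must be chosen such that the
% trajectory leaves the basin of attraction of one equilibrium and settles
% into the other.
```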
Building applications for processing data lakes is a software engineering challenge. We present Darwin, a middleware for applications that operate on variational data. This concerns data with heterogeneous structure, usually stored within a schema-flexible NoSQL database. Darwin assists application developers in essential data and schema curation tasks: Upon request, Darwin extracts a schema description, discovers the history of schema versions, and proposes mappings between these versions. Users of Darwin may interactively choose which mappings are most realistic. Darwin is further capable of rewriting queries at runtime, to ensure that queries also comply with legacy data. Alternatively, Darwin can migrate legacy data to reduce the structural heterogeneity. Using Darwin, developers may thus evolve their data in sync with their code. In our hands-on demo, we curate synthetic as well as real-life datasets.
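To make "variational data" concrete, a toy sketch of the schema-extraction step might look as follows (illustrative Python with invented records; Darwin's actual extraction and version-discovery algorithms are considerably more sophisticated):

```python
# Sketch of schema extraction over variational JSON-like records (illustrative;
# grouping records by their flat structural summary approximates version
# discovery for this toy example).
from collections import Counter

def record_schema(record):
    """Flat structural summary of one record: frozenset of (field, type)."""
    return frozenset((k, type(v).__name__) for k, v in record.items())

records = [
    {"name": "Ada", "team": "QA"},
    {"name": "Ben", "team": "QA", "email": "b@example.org"},  # evolved version
    {"name": "Eva", "team": "Dev", "email": "e@example.org"},
]
versions = Counter(record_schema(r) for r in records)
for schema, count in versions.items():
    print(sorted(schema), "->", count, "record(s)")
```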
This paper aims to provide an overview of the interdisciplinary combination of educational science, psychology, software engineering, and the eye tracking methodology. The domain of software engineering offers great potential for applied eye tracking research and can in turn benefit from the possibilities of this emerging technology. Nevertheless, software engineering still has to contend with several obstacles: differing terminology, missing guidelines for experimental setups, and a lack of common, standardized metrics. If eye tracking is to be used more broadly, these problems must be solved. The main purpose of this paper is to list all eye tracking metrics relevant for software engineering and to give guidelines that help beginners avoid possible pitfalls.
Debian, as a collection of software packages and components, is known to be one of the largest software projects in the history of mankind. Combined with a traceable history over many years, the artefacts created by Debian developers and users make it one of science’s favourite targets for quantitatively or qualitatively understanding how real-world software development works (or does not), how people collaborate, and many other related questions. Unfortunately, while scientists make ample use of the resources and artefacts created by FLOSS and friends, the exchange of insights and ideas does not seem to extend in both directions: developers, users and integrators are often unaware of results obtained in science. This talk will introduce the Debian community to a selection of the most important results obtained by scientific (software engineering) research, with a special focus on large-scale socio-technical analysis of projects like Debian, and the possible implications and improvements these may bring to Debian development itself.
Embedded Linux drives an ever-increasing number of appliances in many domains and applications, some of them real-time and/or safety-critical. Traditional quality assurance of such systems is based on testing and formal verification, but the huge amount of code and the rapid dynamics of the Linux ecosystem, as well as fundamental limitations of formal methods, make these approaches unsatisfactory.
Statistical quality assurance for reliability, error rates, maximal latencies, etc. is needed. We will discuss current best practices: how to design and run automated statistical tests that capture relevant information, and how to properly evaluate the resulting data. Practical real-world examples and recipes are played through using the open source R language. Most importantly, we identify common mistakes in (over-)interpreting statistical results and predictions that may eventually harm people.
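The talk's recipes use R; as a rough Python analogue of one such recipe (toy data, illustrative only), a bootstrap confidence interval for a high latency quantile says considerably more than a bare observed maximum:

```python
# Python analogue of one statistical-QA recipe (illustrative; the talk itself
# uses R): a bootstrap confidence interval for a high latency quantile, which
# is more honest than reporting the observed maximum as "the" worst case.
import numpy as np

rng = np.random.default_rng(1)
latencies_us = rng.lognormal(mean=3.0, sigma=0.4, size=10_000)  # toy data

def bootstrap_quantile_ci(x, q=0.999, n_boot=2000, alpha=0.05):
    stats = [np.quantile(rng.choice(x, size=x.size, replace=True), q)
             for _ in range(n_boot)]
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])

lo, hi = bootstrap_quantile_ci(latencies_us)
print(f"99.9th percentile latency: {np.quantile(latencies_us, 0.999):.1f} us, "
      f"95% CI [{lo:.1f}, {hi:.1f}] us")
```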
This paper describes our implementation, teaching philosophy, and experiences with our C-based version of the widely known Karel the Robot introductory programming micro-language. Karel enables students to programmatically solve problems, using the C language, in a graphical two-dimensional world by moving the robot around while checking and manipulating its surroundings. We use Karel to solve the dilemma of either demanding too much or not enough from students during the first weeks of an introductory CS course, as interesting problems can be solved with limited input from lectures. Karel enables problem solving from day one of CS1, and encourages good software engineering practices such as top-down design from the beginning. We outline typical problems in the first weeks of CS1. We present a short overview of existing Karel implementations in various programming languages and our rationale for re-implementing Karel. We present our teaching philosophy and use of Karel in the classroom. We demonstrate how Karel is being used from a student perspective, along with a typical programming task. We discuss preliminary results of a survey and interviews with students from a first course in which Karel was used.
Schema-flexible NoSQL data stores lend themselves nicely to storing versioned data, a product of schema evolution. In this lightning talk, we apply pending schema changes to records that were persisted several schema versions back. We present first experiments with MongoDB and Cassandra, where we explore the trade-off between applying chains of pending changes stepwise (one after the other) and as composite operations. Contrary to intuition, composite migration is not necessarily faster. The culprit is the computational overhead of deriving the compositions. However, caching composition formulae achieves a speedup: for Cassandra, we can cut the runtime by nearly 80%. Surprisingly, the relative speedup seems to be system-dependent. Our take-away message is that in applying pending schema changes in NoSQL data stores, we need to base our design decisions on experimental evidence rather than on intuition alone.
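The trade-off is easy to state in code. A minimal Python sketch (illustrative and store-independent; the actual MongoDB and Cassandra operations differ) treats pending changes as composable record transformations and caches the derived composition:

```python
# Sketch of the stepwise-versus-composite trade-off (illustrative, independent
# of MongoDB/Cassandra): pending schema changes are functions over a record.
from functools import lru_cache, reduce

CHANGES = {  # change taking a record from version i to version i+1
    1: lambda r: {**r, "email": None},                       # add field
    2: lambda r: {k if k != "team" else "dept": v            # rename field
                  for k, v in r.items()},
    3: lambda r: {k: v for k, v in r.items() if k != "fax"}, # drop field
}
LATEST = 4

def migrate_stepwise(record, version):
    for v in range(version, LATEST):
        record = CHANGES[v](record)
    return record

@lru_cache(maxsize=None)   # caching the composition is what buys the speedup
def composed(version):
    return reduce(lambda f, g: (lambda r: g(f(r))),
                  (CHANGES[v] for v in range(version, LATEST)),
                  lambda r: r)

legacy = {"name": "Ada", "team": "QA", "fax": "none"}
assert migrate_stepwise(dict(legacy), 1) == composed(1)(dict(legacy))
```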
In this paper we describe and evaluate an implementation of CPU-style SIMD ray traversal on the GPU. We show how spreading moderately wide BVHs (up to a branching factor of eight) across multiple threads in a warp can improve performance while not requiring expensive pre-processing. The presented ray-traversal method exhibits improved traversal performance especially for increasingly incoherent rays.
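At its core, such a traversal step is a slab test evaluated for all children of a wide node at once. The following numpy sketch is illustrative only: vectorization stands in for the threads of a warp sub-group, and all GPU specifics are omitted:

```python
# Sketch of the core idea of wide-BVH traversal (illustrative): the slab test
# against all eight children of a BVH8 node is evaluated in one vectorized
# step, standing in for eight cooperating threads of a warp on the GPU.
import numpy as np

def intersect_children(lo, hi, origin, inv_dir, t_max):
    """lo, hi: (8, 3) child AABBs; returns a boolean hit mask of shape (8,)."""
    t0 = (lo - origin) * inv_dir
    t1 = (hi - origin) * inv_dir
    t_near = np.minimum(t0, t1).max(axis=1)   # latest entry over the 3 slabs
    t_far = np.maximum(t0, t1).min(axis=1)    # earliest exit over the 3 slabs
    return (t_near <= t_far) & (t_far >= 0.0) & (t_near <= t_max)

rng = np.random.default_rng(2)
lo = rng.uniform(0, 4, (8, 3)); hi = lo + rng.uniform(0.5, 2, (8, 3))
direction = np.array([1.0, 0.2, 0.1]); direction /= np.linalg.norm(direction)
mask = intersect_children(lo, hi, np.zeros(3), 1.0 / direction, np.inf)
print(mask)   # which of the eight children the ray has to descend into
```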
Subdivision surfaces, especially with displacement, are one of the key modeling primitives used in high-quality rendering environments such as movie production. While their use easily maps to rasterization-based frameworks, they pose a significant challenge for ray tracing environments. This is due to the fact that incoherent access patterns require storing or caching fully tessellated and displaced meshes for efficient intersection computations. In this paper we use a two-tier hierarchy built on a scene's patches. It relies on compressed and quantized bounding volumes on the second tier to reduce the size of the BVH itself. Based on this acceleration structure, we propose a quantized, compact approximation for leaf nodes that remains faithful to the underlying patch geometry. We build on recent advances and present a system that shows competitive run-time performance, close to full-resolution pre-tessellation methods as well as to previous compression approaches. Ultimately, we provide strong compression of up to a factor of 5:1 compared to state-of-the-art methods while maintaining high geometrical fidelity, surpassing similarly compact approximations and getting close to uncompressed geometry.
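The bounding-volume quantization can be sketched compactly (illustrative Python; the paper's exact encoding and bit widths may differ): child boxes are stored as 8-bit offsets within their parent box, rounded outward so the bounds stay conservative:

```python
# Sketch of quantized bounding volumes (illustrative): child AABBs are stored
# as 8-bit offsets within their parent box, shrinking the second-tier BVH.
# Lower bounds round down and upper bounds round up, keeping boxes conservative.
import numpy as np

def quantize(child_lo, child_hi, parent_lo, parent_hi):
    extent = parent_hi - parent_lo
    q_lo = np.floor((child_lo - parent_lo) / extent * 255).astype(np.uint8)
    q_hi = np.ceil((child_hi - parent_lo) / extent * 255).astype(np.uint8)
    return q_lo, q_hi

def dequantize(q_lo, q_hi, parent_lo, parent_hi):
    extent = parent_hi - parent_lo
    return (parent_lo + q_lo / 255 * extent,
            parent_lo + q_hi / 255 * extent)

p_lo, p_hi = np.zeros(3), np.ones(3) * 10
c_lo, c_hi = np.array([1.23, 4.56, 7.89]), np.array([2.34, 5.67, 8.9])
d_lo, d_hi = dequantize(*quantize(c_lo, c_hi, p_lo, p_hi), p_lo, p_hi)
assert np.all(d_lo <= c_lo) and np.all(d_hi >= c_hi)  # bounds stay conservative
```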
In this paper we present a scattering-based method to compute high-quality depth of field in real time. Relying on multiple layers of scene data, our method naturally supports settings with partial occlusion, an important effect that is often disregarded by real-time approaches. Using well-founded layer-reduction techniques and an efficient mapping to the GPU, our approach outperforms established approaches with a similarly high-quality feature set. Our proposed algorithm works by collecting a multi-layer image, which is then directly reduced to keep only hidden fragments close to discontinuities. Fragments are then further reduced by merging and splatted to screen-space tiles. The per-tile information is sorted and accumulated in order, yielding an overall approach that supports partial occlusion as well as properly ordered blending of the out-of-focus fragments.
Drones are now being used in many very different contexts. From the perspective of technology assessment (TA), it therefore seems sensible to take stock, examining more closely the extent of current and future drone use and the implications that result from it. Beyond that, the expected paths of further technological development, the relevant actors and their interests, as well as future application potentials and fields of use are to be analyzed.
Background and objective
Parkinson’s disease (PD) is considered a degenerative disorder that affects the motor system and may cause tremor, micrographia, and freezing of gait. Although PD is related to a lack of dopamine, the process that triggers its development is not yet fully understood.
Methods
In this work, we introduce convolutional neural networks to learn features from images produced by handwritten dynamics, which capture different information during the individual’s assessment. Additionally, we make available a dataset composed of images and signal-based data to foster research related to computer-aided PD diagnosis.
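A minimal model definition in this spirit could look as follows (an illustrative PyTorch sketch with placeholder layer sizes and input dimensions, not the architecture evaluated in the paper):

```python
# Illustrative CNN for small single-channel images of handwritten dynamics
# (not the paper's architecture; all layer sizes here are placeholders).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),      # two classes: PD patient vs. control
)
logits = model(torch.randn(8, 1, 64, 64))  # batch of 8 toy 64x64 images
print(logits.shape)                        # torch.Size([8, 2])
```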
Results
The proposed approach was compared against raw data and texture-based descriptors, showing suitable results, mainly in the context of early-stage detection, with results close to 95%.
Conclusions
The analysis of handwritten dynamics using deep learning techniques proved useful for automatic Parkinson’s disease identification and can outperform handcrafted features.
This article provides a mathematical analysis of singular (nonsmooth) artifacts added to reconstructions by filtered backprojection (FBP) type algorithms for X-ray computed tomography (CT) with arbitrary incomplete data. We prove that these singular artifacts arise from points at the boundary of the data set. Our results show that, depending on the geometry of this boundary, two types of artifacts can arise: object-dependent and object-independent artifacts. Object-dependent artifacts are generated by singularities of the object being scanned, and these artifacts can extend along lines. They generalize the streak artifacts observed in limited-angle tomography. Object-independent artifacts, on the other hand, are essentially independent of the object and take one of two forms: streaks on lines if the boundary of the data set is not smooth at a point, and curved artifacts if the boundary is smooth locally. We prove that these streak and curve artifacts are the only singular artifacts that can occur for FBP in the continuous case. In addition to the geometric description of artifacts, the article provides characterizations of their strength in Sobolev scale in certain cases. The results of this article apply to the well-known incomplete data problems, including limited-angle and region-of-interest tomography, as well as to unconventional X-ray CT imaging setups that arise in new practical applications. Reconstructions from simulated and real data are analyzed to illustrate our theorems, including the reconstruction that motivated this work: a synchrotron data set in which artifacts appear on lines that have no relation to the object.
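For orientation, the reconstruction operators in question have the following familiar shape (a common textbook formulation; the article's own conventions and normalization constants may differ):

```latex
% Full-data FBP identity and its incomplete-data variant (illustrative; the
% article's own operator conventions and constants may differ):
\[
  f \;=\; c\, R^{\ast} \Lambda\, R f ,
  \qquad
  f_{A} \;=\; c\, R^{\ast} \Lambda \bigl( \chi_{A}\, R f \bigr),
\]
% where R is the X-ray (Radon) transform, R* its adjoint (backprojection),
% Lambda the ramp filter, c a normalization constant, and chi_A the indicator
% function of the acquired data set A. The sharp cutoff chi_A at the boundary
% of A is precisely where the added singular artifacts originate.
```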
Controller Area Network (CAN) is still the most used network technology in today's connected cars. Now and in the near future, penetration tests in the area of automotive security will still require tools for CAN media access. More and more open source automotive penetration tools and frameworks are presented by researchers at various conferences, all with different properties in terms of usability, features, and supported use cases. Choosing a proper tool for security investigations in automotive networks poses a challenge, since lots of different solutions are available. This paper compares currently available CAN media access solutions and gives advice on competitive hardware and software tools for automotive penetration testing.
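As a concrete example of the kind of CAN media access such tools build on, here is a minimal send/receive sketch using the open source python-can package on a Linux virtual CAN interface (an illustrative choice, not necessarily one of the tools ranked in the paper; the ID and payload are invented):

```python
# Minimal CAN send/receive sketch with python-can (assumes the python-can
# package and a configured Linux vcan0 device; frame contents are invented).
import can

bus = can.interface.Bus(channel="vcan0", bustype="socketcan")
frame = can.Message(arbitration_id=0x123,
                    data=[0xDE, 0xAD, 0xBE, 0xEF],
                    is_extended_id=False)
bus.send(frame)                 # inject a frame onto the bus
reply = bus.recv(timeout=1.0)   # sniff the next frame, if any
print(reply)
bus.shutdown()
```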
The automotive industry currently faces several challenges, including a growing complexity in system architecture. At the same time, the task load and the performance requirements are increasing. To address this problem, the A3Fa research project evaluates scalable distributed concepts for future vehicle system architectures. These can be seen as comparable to cluster-computing systems, which are applied in high-performance or high-availability use cases. Methods used in such scenarios will also be important features in future vehicle architectures, such as horizontal application scalability, application load balancing and reallocation, as well as functionality upgrades triggered by the user.
This paper focuses on concepts and methods for the reliability of applications and hardware in future in-vehicle distributed system architectures. It is argued that future automotive computing systems will evolve towards enterprise IT systems similar to today’s data centers, and that vehicle systems can benefit greatly from concepts established there.
In particular, protection against failures of functions and hardware in such systems is discussed. For this purpose, various such mechanisms used in information technology are investigated, and a layer-based classification representing the different fail-safe levels is proposed.
With parallel applications becoming more and more popular even in real-time systems, the demand for safe and easy-to-use software libraries and frameworks for parallel and concurrent computations is growing immensely. These frameworks usually provide implementations of different sets of software patterns. A very well known software pattern for concurrency is the Active Object pattern, which allows multiple threads to have synchronized access to an object in question. This paper presents the Parallel Active Object pattern, which extends the common Active Object pattern to support objects whose computations benefit substantially from parallel execution. Furthermore, a C++ software framework is introduced which implements the Parallel Active Object pattern and thus provides the possibility of using task- or data-parallel patterns, for example Map, Reduce, and Divide-and-Conquer, for the active object's calculations. The proposed framework is evaluated against two other popular libraries, namely OpenMP and Intel Threading Building Blocks. Through utilization of the C++11 standard and template classes, a simple user interface is provided which abstracts the distribution of workloads among the worker threads. By making use of the C++ Standard Template Library, the framework can easily be ported to embedded systems. By extending the pattern with real-time capabilities that ensure a timely and reliable execution of method requests, we also intend to make the framework available for time-critical environments in the future.
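The base pattern being extended is small enough to sketch. The following Python version (illustrative only; the proposed framework is C++, and its API is not reproduced here) shows the essential machinery of an Active Object: callers enqueue method requests and receive futures, while a single scheduler thread serializes execution:

```python
# Minimal Active Object sketch (illustrative, not the C++ framework's API):
# method requests are queued and answered via futures; one scheduler thread
# serializes all access to the underlying computation.
import threading, queue
from concurrent.futures import Future

class ActiveObject:
    def __init__(self):
        self._requests = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:                      # scheduler loop: one request at a time
            fn, args, fut = self._requests.get()
            try:
                fut.set_result(fn(*args))
            except Exception as exc:
                fut.set_exception(exc)

    def call(self, fn, *args):
        fut = Future()
        self._requests.put((fn, args, fut))
        return fut                       # caller continues without blocking

counter = ActiveObject()
futures = [counter.call(lambda x: x * x, i) for i in range(4)]
print([f.result() for f in futures])     # [0, 1, 4, 9]
```

The Parallel Active Object variant replaces the scheduler's sequential execution with dispatch onto a worker pool that applies patterns such as Map or Divide-and-Conquer, while the request interface stays unchanged.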
Development trends for computing platforms moved from increasing the frequency of a single processor to increasing the parallelism with multiple cores on the same die. Multiple cores have strong potential to support cost-efficient fault tolerance due to their inherent spatial redundancy. This work makes a step towards software-only fault tolerance in the presence of permanent and transient hardware faults. Our approach utilizes software-based spatial triple modular redundancy and coded processing on a shared memory multi-core controller. We evaluate our approach on an Infineon AURIX TriBoard TC277 and provide experimental evidence for error resistance by fault injection campaigns with an iSystem iC5000 On-chip Analyzer.
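The voting half of such a scheme is simple to illustrate (a minimal Python sketch of plain TMR majority voting; the paper additionally distributes the replicas across cores and combines them with coded processing):

```python
# Minimal sketch of software-based triple modular redundancy (illustrative):
# the same computation runs three times and a majority vote masks a single
# faulty replica; spatial separation across cores is not modeled here.
def tmr(replicas, *args):
    results = [f(*args) for f in replicas]
    for candidate in results:
        if results.count(candidate) >= 2:
            return candidate
    raise RuntimeError("no majority: fault not maskable")

square = lambda x: x * x
faulty = lambda x: x * x ^ 0x04          # simulated fault: bit 2 flipped
print(tmr([square, square, faulty], 3))  # -> 9, the faulty replica is outvoted
```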
Under current automotive safety standards, the use of artificial intelligence (AI) in safety-critical environments such as autonomous driving is not possible. This paper introduces a new conceptual safety modelling approach and a safety argumentation to certify AI algorithms in a safety-related context. To this end, a model of an AI system is presented first. Afterwards, methods and safety argumentation are applied to the model, which is limited to a specific subset of AI systems, namely off-board-learning deterministic neural networks; other cases are left for future research. The result is a consistent safety analysis approach that transfers state-of-the-art safety argumentations from other domains to the automotive domain. This will support the adaptation of the functional safety standard ISO 26262 to enable general AI methods in safety-critical systems in the future.
This study is based on the work of Uwano, Nakamura, Monden and Matsumoto (2006), who tried to identify programmers’ eye movements in source code reviews by using eye tracking technology. The researchers were able to identify certain eye movement patterns, but due to the technical limitations of earlier eye tracking systems and a small sample they could not find valid proof of their existence. Twelve years later, eye tracking technology has made significant improvements and is able to capture programmers’ reading behavior in an unobtrusive and precise way. The goal now is to verify the described patterns by using eye tracking data from expert and novice programmers. In the experiment, they have to detect errors in six different pieces of source code and take part in a retrospective interview. At the moment, data collection is ongoing; at the time of the conference, we will present the results of our analyses.
Mastering complexity is one of the greatest engineering challenges of the 21st century. Topics such as the Internet of Things (IoT) and Industrie 4.0 are accelerating this trend. Model-driven development makes a decisive contribution to meeting these challenges successfully.
The authors give a well-founded introduction and a practice-oriented overview of modeling software for embedded systems, from requirements through architecture to design, code generation, and testing. For each phase, paradigms,
methods, techniques, and tools are described, with an emphasis on their practical application. In addition, the integration of tools, functional safety, and metamodeling are covered, as well as the introduction of a model-based approach in an organization and the need for lifelong learning.
In this book, the reader learns how a model-based approach can be used profitably in practice for software development. The approach is presented independently of specific modeling tools. Numerous examples, some based on concrete tools, help with the practical implementation.
Today, due to the rapidly evolving technology within the automotive industry, the automation level of cars is continuously increasing. As a consequence, the software code base implementing the automated driving functionality is growing in both complexity and size. Simultaneously, the semiconductor industry continues structure and voltage downscaling, with diminishing design margins and stringent power constraints. This trend leads to highly integrated hardware on the one hand, whilst provoking an increase in sensitivity to external causes of hardware faults, e.g., radiation effects or electromagnetic interference, on the other. Among the available dependability assessment techniques, fault injection (FI) is widely adopted, and ISO 26262 strongly recommends applying it to validate that functional and technical safety mechanisms are implemented correctly and effectively. We present PyFI (Python backend for Fault Injection), a fault injection backend for the Infineon AURIX TriCore which utilizes an iSystem On-chip Analyzer to inject faults into the application data or instructions that are visible at the assembly level. PyFI allows the injection of bit flips and stuck-at faults in memory and register cells of the hardware, which trigger error symptoms at the application level. Furthermore, it implements fault collapsing algorithms to reduce the number of faults and the duration of single experiments by gathering statistics about the static and dynamic application execution.
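The underlying fault models are easy to state (an illustrative Python sketch of bit flips and stuck-at faults on a 32-bit word; PyFI's actual interface to the on-chip debugger is not reproduced here):

```python
# Illustrative bit-flip and stuck-at fault models as used in FI campaigns
# (this shows only the fault model; PyFI drives the real hardware through
# the on-chip debugger, and its actual API is not reproduced here).
def flip_bit(word, bit):
    """Transient fault: invert one bit of a 32-bit register/memory word."""
    return (word ^ (1 << bit)) & 0xFFFFFFFF

def stuck_at(word, bit, value):
    """Permanent fault: force one bit to 0 or 1."""
    mask = 1 << bit
    return (word | mask) if value else (word & ~mask & 0xFFFFFFFF)

golden = 0x0000_1234
print(hex(flip_bit(golden, 0)))      # 0x1235 - single-event upset on bit 0
print(hex(stuck_at(golden, 2, 0)))   # 0x1230 - stuck-at-0 fault on bit 2
```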
Future embedded systems demand ever more computing performance, which can only be provided by exploiting parallelism in real-time applications. Due to scheduling and scalability issues, parallelism is still an open issue, especially in hard real-time systems. In this work-in-progress paper, we describe and discuss a hierarchical gang-scheduling-based approach to address the scalability issue. We use gang scheduling to schedule tasks consisting of multiple kernel-level threads. The execution budget provided by the kernel-level threads is used for scheduling user-level threads via a lightweight threading library running in user space. Further, the first steps towards an implementation in the real-time operating system kernel Erika OS are described, and possible benefits and risks of this approach are shown.
Technik in der Pflege (2018)
With examples concerning the development and dissemination of computer technology in the Soviet Union, the U.S., and other Western countries, it is demonstrated that computer development on the one hand, and social change as well as changes in policy making and administration on the other, are intertwined without a clear direction of causation being discernible. It is also shown that perceived social and political threats posed by early computer technology sometimes actually helped to stop or at least slow down social change.
One conclusion for RRI that can be drawn from the case studies described is that the conscious steering of innovations fails because of diffuse and uncoordinated resistance from very different stakeholders. The case studies also suggest that the effectiveness of RRI might be rather limited.
The advancing digitization of the health care system is also paving the way for a wider adoption of telemedical care in Germany. Policy makers are easing this path, for example with the eHealth Act or with the relaxation of the "ban on remote treatment" by the German Medical Assembly (Deutscher Ärztetag). It is already clear from the definition (see, e.g., the definition by the AG Telemedizin below) that telemedicine does not constitute a new medical discipline. Telemedicine is merely a technical tool, combined with suitable clinical processes, that allows existing medical expertise to bridge spatial distances or time offsets by means of information and communication technologies. Patients can thus be treated even when they are far away from the physician.
Purpose
Age-related macular degeneration (AMD) is a common threat to vision. While classification of disease stages is critical to understanding disease risk and progression, several classification systems based on color fundus photographs exist. Most of these require in-depth and time-consuming analysis of fundus images. Herein, we present an automated computer-based classification algorithm.
Design
Algorithm development for AMD classification based on a large collection of color fundus images. Validation is performed on a cross-sectional, population-based study.
Participants
We included 120,656 manually graded color fundus images from 3654 Age-Related Eye Disease Study (AREDS) participants. AREDS participants were >55 years of age, and non-AMD sight-threatening diseases were excluded at recruitment. In addition, the performance of our algorithm was evaluated on 5555 fundus images from the population-based Kooperative Gesundheitsforschung in der Region Augsburg (KORA; Cooperative Health Research in the Region of Augsburg) study.
Methods
We defined 13 classes (9 AREDS steps, 3 late AMD stages, and 1 for ungradable images) and trained several convolutional deep learning architectures. An ensemble of network architectures improved prediction accuracy. An independent dataset was used to evaluate the performance of our algorithm in a population-based study.
Main Outcome Measures
κ statistics and accuracy to evaluate the concordance between predicted classifications and expert human grader classifications.
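The quadratic weighting referred to here penalizes disagreements by the square of their distance on the ordinal grading scale; a plain numpy sketch (toy data; not the paper's evaluation code) is:

```python
# Quadratic weighted kappa as commonly defined for ordinal grading scales
# (a plain numpy sketch; the paper's exact evaluation code is not shown).
import numpy as np

def quadratic_weighted_kappa(y_true, y_pred, n_classes):
    observed = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        observed[t, p] += 1
    observed /= observed.sum()
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0))
    i, j = np.indices((n_classes, n_classes))
    weights = (i - j) ** 2 / (n_classes - 1) ** 2   # quadratic penalty
    return 1 - (weights * observed).sum() / (weights * expected).sum()

# toy example: grader vs. algorithm on a 13-class scale
rng = np.random.default_rng(3)
truth = rng.integers(0, 13, 500)
pred = np.clip(truth + rng.integers(-1, 2, 500), 0, 12)  # mostly near misses
print(round(quadratic_weighted_kappa(truth, pred, 13), 3))
```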
Results
A network ensemble of 6 different neural net architectures predicted the 13 classes in the AREDS test set with a quadratic weighted κ of 92% (95% confidence interval, 89%–92%) and an overall accuracy of 63.3%. In the independent KORA dataset, images wrongly classified as AMD were mainly the result of a macular reflex observed in young individuals. By restricting the KORA analysis to individuals >55 years of age and prior exclusion of other retinopathies, the weighted and unweighted κ increased to 50% and 63%, respectively. Importantly, the algorithm detected 84.2% of all fundus images with definite signs of early or late AMD. Overall, 94.3% of healthy fundus images were classified correctly.
Conclusions
Our deep learning algorithm achieved a weighted κ outperforming human graders in the AREDS study and is suitable for classifying AMD fundus images in other datasets of individuals >55 years of age.
There is a common understanding amongst academics that information systems (IS) research sometimes has limited relevance for practitioners. This can be explained by the fact that research lags behind the fast-moving IS environment, has limited practical applicability, and is hard to access. We focused on the research area of IS backsourcing and analyzed practitioner literature to increase our understanding of the topics of interest to practitioners, to determine a potential gap between academic and practitioner literature, and to identify future research directions in this field. We observed that most publications are either news or background articles focusing on describing backsourcing cases. Additionally, we identified four recurring themes, namely reasons for backsourcing, presentation of survey results, discussion of industry trends, and backsourcing success stories. The main reasons identified as triggering backsourcing decisions are cost savings, quality improvements, and increased control and flexibility. By comparing our findings with the academic literature on IS backsourcing, we conclude that both literature types generally cover similar topics. However, researchers have a more formulative or interpretive focus than the often descriptive practitioner literature. Academic literature also examines a broader range of topics, while practitioner literature has a narrower focus. Additionally, we observe one difference regarding terminology: while researchers employ the term backsourcing, practitioners mostly use back in-house or insourcing. Our paper contributes to facilitating the exchange between academics and practitioners, presents topics to consider when aiming to increase practical relevance, and provides researchers with concrete directions for future research within the field of IS backsourcing.
Information systems backsourcing describes the transfer of previously outsourced activities, assets, or personnel back to the originating company to regain ownership and control. While there is much research on information systems outsourcing, the topic of backsourcing information systems is still an emerging research area. Therefore, our paper aims to explore and synthesize the existing literature on information systems backsourcing, since to our knowledge no exhaustive literature review of the state of the research is available yet. In this paper, we create a framework to structure the existing research along the overall backsourcing process. We identify different motivators, such as expectation gaps or internal and external organizational changes, leading towards a backsourcing decision, and factors positively or negatively influencing this decision. Additionally, we derive implementation success factors based on the existing literature to guide companies through the backsourcing process. We also differentiate the term backsourcing from related, sometimes synonymously used terms by emphasizing the change of ownership back to the company of origin as the main criterion. Additionally, we discuss opportunities for future research in the field of information systems backsourcing.
Research on Shadow IT is facing a conceptual dilemma in cases where previously “covert” systems developed by business entities are integrated into the organizational IT management. These systems become visible, are thus no longer “in the shadows”, and subsequently do not fit existing definitions of Shadow IT. Practice shows that some information systems share characteristics of Shadow IT but are created openly, in alignment with the IT organization. This paper proposes the term “Business-managed IT” to describe “overt” information systems developed or managed by business entities and distinguishes it from Shadow IT by means of illustrative case vignettes. Accordingly, our contribution is to suggest a concept and its delineation against other concepts. In this way, IS researchers interested in IT originating from or maintained by business entities can construct theories with a wider scope of application that are at the same time more specific to practical problems. In addition, the terminology makes it possible to value potentially innovative developments by business entities more adequately.
Purpose
With the rise of digitization, IT organizations are challenged to provide efficient service delivery and offer innovative digital solutions while maintaining a constant resource capacity. To address this challenge, some IT organizations have adopted Lean Management (LM). Although LM is a standard production mode in manufacturing, it is less familiar to IT organizations. The purpose of this paper is to identify 12 lessons learned from companies that implemented LM in their IT organization (Lean IT) to free up IT resource capacity from day-to-day operations so it could be used to enable their digitization strategy.
Design/methodology/approach
A case study of two major international companies from different industries. Data were collected from 25 structured interviews.
Findings
The lessons learned provide insights into how these companies implemented Lean IT, the potential outcomes they aimed for, what they did to achieve those outcomes, how they facilitated the implementation of Lean IT, and restrictions they encountered during the implementation.
Research limitations/implications
The findings are based on a limited range of IT organizations.
Practical implications
The lessons learned inform those implementing Lean IT because they explain how companies have implemented Lean IT to facilitate digitization and the benefits and pitfalls they encountered. A comparison of Lean IT and Lean Production shows that LM is transferable to IT organizations if domain-specific requirements are respected.
Originality/value
This paper reports the unique experience of companies implementing Lean IT, which can inform other companies in a similar situation.