Digitalisierung
This paper addresses the problem of properly placing a given task in the manipulator workspace by a heuristic and numeric approach. To this end, the task is placed relative to the manipulator for each element of the discretized workspace and the required joint torques are determined. The results are evaluated by a torque-based optimization criterion. The modularity of this approach ensures general applicability to various systems and tasks, while the high computational effort is handled by GPU parallelization. The method is presented for a given 6-DOF manipulator and a highly dynamic trajectory. The resulting interactive map of the manipulator workspace gives an overview of the task-dependent dynamic performance; a detailed evaluation of selected solutions demonstrates the capabilities of the proposed approach.
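The following is a minimal sketch of the underlying idea, not the paper's method: it uses a toy planar 2-DOF arm with point masses, static gravity torques instead of full inverse dynamics along a trajectory, and plain NumPy loops instead of GPU parallelization. All numbers and the scoring function are illustrative assumptions.

```python
# Minimal sketch (not the paper's 6-DOF GPU pipeline): place a task point
# relative to a planar 2-DOF arm on a discretized grid and score each
# placement by the static gravity torques needed to hold the task point.
import numpy as np

L1, L2 = 0.4, 0.3          # link lengths [m] (toy values)
M1, M2 = 2.0, 1.5          # point masses at the link ends [kg]
G = 9.81

def static_torques(x, y):
    """Inverse kinematics + static torques for a vertical planar 2-link arm."""
    c2 = (x**2 + y**2 - L1**2 - L2**2) / (2 * L1 * L2)
    if abs(c2) > 1.0:
        return None                      # task point not reachable
    q2 = np.arccos(c2)
    q1 = np.arctan2(y, x) - np.arctan2(L2 * np.sin(q2), L1 + L2 * np.cos(q2))
    tau2 = M2 * G * L2 * np.cos(q1 + q2)
    tau1 = (M1 + M2) * G * L1 * np.cos(q1) + tau2
    return np.array([tau1, tau2])

# Discretize candidate task placements relative to the manipulator base
xs = np.linspace(-0.7, 0.7, 71)
ys = np.linspace(-0.7, 0.7, 71)
score = np.full((len(ys), len(xs)), np.nan)
for i, y in enumerate(ys):
    for j, x in enumerate(xs):
        tau = static_torques(x, y)
        if tau is not None:
            score[i, j] = np.max(np.abs(tau))   # torque-based criterion

best = np.unravel_index(np.nanargmin(score), score.shape)
print("best placement:", xs[best[1]], ys[best[0]], "max |tau| =", score[best])
```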
EMDLAB: A toolbox for analysis of single-trial EEG dynamics using empirical mode decomposition
(2015)
Background:
Empirical mode decomposition (EMD) is an empirical data decomposition technique. Recently, there has been growing interest in applying EMD in the biomedical field.
New method:
EMDLAB is an extensible plug-in for the EEGLAB toolbox, which is an open software environment for electrophysiological data analysis.
Results:
EMDLAB can be used to easily and effectively perform four common variants of EMD on EEG data: plain EMD, ensemble EMD (EEMD), weighted sliding EMD (wSEMD), and multivariate EMD (MEMD). In addition, EMDLAB is a user-friendly toolbox that is tightly integrated into the EEGLAB toolbox.
Comparison with existing methods:
EMDLAB gains an advantage over other open-source toolboxes by exploiting the advantageous visualization capabilities of EEGLAB for extracted intrinsic mode functions (IMFs) and Event-Related Modes (ERMs) of the signal.
Conclusions:
EMDLAB is a reliable, efficient, and automated solution for extracting and visualizing the IMFs and ERMs obtained by EMD algorithms in EEG studies.
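As a rough illustration of what plain EMD does (EMDLAB itself is a MATLAB/EEGLAB plug-in that implements the full algorithms with proper stopping criteria and boundary handling), the following hedged Python sketch performs a fixed number of sifting iterations and extracts IMFs from a synthetic EEG-like signal.

```python
# Minimal sketch of plain EMD (no boundary handling, fixed sifting count);
# it only illustrates the idea behind the toolbox, not its implementation.
import numpy as np
from scipy.interpolate import CubicSpline

def sift(h, n_sift=10):
    t = np.arange(len(h))
    for _ in range(n_sift):
        maxima = np.where((h[1:-1] > h[:-2]) & (h[1:-1] > h[2:]))[0] + 1
        minima = np.where((h[1:-1] < h[:-2]) & (h[1:-1] < h[2:]))[0] + 1
        if len(maxima) < 4 or len(minima) < 4:
            break
        upper = CubicSpline(maxima, h[maxima])(t)   # upper envelope
        lower = CubicSpline(minima, h[minima])(t)   # lower envelope
        h = h - (upper + lower) / 2.0               # remove local mean
    return h

def emd(x, max_imfs=6):
    imfs, residual = [], x.astype(float)
    for _ in range(max_imfs):
        imf = sift(residual)
        imfs.append(imf)
        residual = residual - imf
        # stop when the residual has (almost) no extrema left, i.e. is a trend
        if np.sum(np.diff(np.sign(np.diff(residual))) != 0) < 2:
            break
    return np.array(imfs), residual

fs = 250.0
t = np.arange(0, 2, 1 / fs)
eeg_like = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 2 * t)
imfs, res = emd(eeg_like)
print("extracted IMFs:", imfs.shape[0])
```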
Today, ubiquitous mobile devices have not only arrived but have entered the safety-critical domain. There, they are about to control systems in which human health or even human life is put at risk. For example, in automation systems, first ideas are surfacing to control parts of the system via a COTS smartphone. Another example is the idea of controlling the autonomous parking function of a car via a COTS smartphone as well. As beneficial and convenient as these ideas appear at first thought, on second thought the dangers of these approaches become obvious. Especially in case of failures, the system's safety has to be maintained. The open question is how to achieve this mandatory requirement with COTS components, e.g. smartphones, which are not developed following the development process necessary for safety-critical systems. This paper presents a concept to reliably detect human interaction while activating safety-critical functions via COTS mobile devices. Thus, a means is provided to detect erroneous activation requests for the safety-critical function.
The performance of cognitive models often depends on the settings of specific model parameters, such as the rate of memory decay or the speed of motor responses. The systematic exploration of a model's parameter space can yield relevant insights into model behavior and can also be used to improve the fit of a model to human data. However, exhaustive parameter space searches quickly run into a combinatorial explosion as the number of parameters investigated increases. Taking an established instance-based learning task as an example, we show how simulation using parallel computing and derivative-free optimization methods can be applied to investigate the effects of different parameter settings. We find that both global optimization methods involving genetic algorithms and local methods yield satisfactory results in this case. Furthermore, we show how a model implemented in a specific cognitive architecture (ACT-R) can be mathematically reformulated to prepare the application of derivative-based optimization methods, which promise further efficiency gains for quantitative analysis.
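As a hedged sketch of the workflow described above (with a hypothetical stand-in model instead of an ACT-R simulation, and toy human data), the following combines a global evolutionary search with a local derivative-free refinement using SciPy.

```python
# Hedged sketch: fitting two free parameters of a stand-in memory model to
# toy human data with a global evolutionary method and a local derivative-free
# method; the actual study runs costly ACT-R simulations instead.
import numpy as np
from scipy.optimize import differential_evolution, minimize

human_data = np.array([0.35, 0.52, 0.63, 0.71, 0.78, 0.82])   # toy accuracies
trials = np.arange(1, len(human_data) + 1)

def simulate(decay, noise):
    """Stand-in for a cognitive-model simulation (hypothetical, not ACT-R)."""
    activation = np.log(trials) - decay * trials**0.5
    return 1.0 / (1.0 + np.exp(-activation / max(noise, 1e-6)))

def rmse(params):
    decay, noise = params
    return np.sqrt(np.mean((simulate(decay, noise) - human_data) ** 2))

bounds = [(0.0, 1.0), (0.05, 2.0)]

# Global, evolutionary, derivative-free search
# (set workers=-1 to parallelize the population evaluations across CPU cores)
result_global = differential_evolution(rmse, bounds, seed=1)

# Local derivative-free refinement starting from the global optimum
result_local = minimize(rmse, result_global.x, method="Nelder-Mead")

print("global fit:", result_global.x, result_global.fun)
print("local refinement:", result_local.x, result_local.fun)
```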
PURPOSE
Reconstruction of x-ray computed tomography (CT) data remains a mathematically challenging problem in medical imaging. Complementing the standard analytical reconstruction methods, sparse regularization is growing in importance, as it allows the inclusion of prior knowledge. The paper presents a method for sparse regularization based on the curvelet frame for application to iterative reconstruction in x-ray computed tomography.
METHODS
In this work, the authors present an iterative reconstruction approach based on the alternating direction method of multipliers using curvelet sparse regularization.
RESULTS
Evaluation of the method is performed on a specifically crafted numerical phantom dataset to highlight the method's strengths. Additional evaluation is performed on two real datasets from commercial scanners with different noise characteristics, a clinical bone sample acquired in a micro-CT and a human abdomen scanned in a diagnostic CT. The results clearly illustrate that curvelet sparse regularization has characteristic strengths. In particular, it improves the restoration and resolution of highly directional, high contrast features with smooth contrast variations. The authors also compare this approach to the popular technique of total variation and to traditional filtered backprojection.
CONCLUSIONS
The authors conclude that curvelet sparse regularization is able to improve reconstruction quality by reducing noise while preserving highly directional features.
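A hedged sketch of the general ADMM-with-sparsity recipe, not the authors' implementation: it uses a small dense matrix in place of the CT projection operator and the identity in place of the curvelet transform, so the z-update reduces to plain soft-thresholding.

```python
# Hedged sketch of ADMM for min_x 0.5*||A x - b||^2 + lam*||W x||_1 with an
# orthonormal sparsifying transform W standing in for the curvelet frame
# (here simply W = I); the paper applies this to real CT projection operators.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_l1(A, b, lam=0.1, rho=1.0, n_iter=200):
    n = A.shape[1]
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)
    # With W orthonormal, W^T W = I, so the x-update is a plain linear solve
    lhs = A.T @ A + rho * np.eye(n)
    for _ in range(n_iter):
        x = np.linalg.solve(lhs, A.T @ b + rho * (z - u))
        z = soft_threshold(x + u, lam / rho)   # prox of lam*||.||_1
        u = u + x - z
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((80, 120))            # toy "projection" operator
x_true = np.zeros(120)
x_true[[5, 40, 77]] = [1.0, -2.0, 1.5]
b = A @ x_true + 0.01 * rng.standard_normal(80)
x_rec = admm_l1(A, b)
print("largest recovered coefficients at:", np.argsort(-np.abs(x_rec))[:3])
```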
A Step Towards the Automated Diagnosis of Parkinson's Disease: Analyzing Handwriting Movements
(2015)
Parkinson's disease (PD) affects millions of people worldwide; its major problem is the loss of movement and, consequently, of the ability to work and to move about. Although several works that attempt to deal with this problem can be found, most of them make use of datasets composed of only a few subjects. In this work, we present some results toward the automated diagnosis of PD by means of computer vision-based techniques on a dataset composed of dozens of patients, which is one of the main contributions of this work. The dataset is part of a joint research project that aims at extracting both visual and signal-based information from healthy subjects and PD patients in order to advance the early diagnosis of PD. The dataset consists of clinical handwriting exams that are analyzed by means of image processing and machine learning techniques, and the preliminary results are encouraging and promising. Additionally, a new quantitative feature to measure the amount of tremor in an individual's handwritten trace, called Mean Relative Tremor, is also presented.
Artifacts in Incomplete Data Tomography with Applications to Photoacoustic Tomography and Sonar
(2015)
We develop a paradigm using microlocal analysis that allows one to characterize the visible and added singularities in a broad range of incomplete data tomography problems. We give precise characterizations for photoacoustic and thermoacoustic tomography and sonar, and provide artifact reduction strategies. In particular, our theorems show that it is better to arrange sonar detectors so that the boundary of the set of detectors does not have corners and is smooth. To illustrate our results, we provide reconstructions from synthetic spherical mean data as well as from experimental photoacoustic data.
We propose a new algorithmic approach to the non-smooth and non-convex Potts problem (also called piecewise-constant Mumford–Shah problem) for inverse imaging problems. We derive a suitable splitting into specific subproblems that can all be solved efficiently. Our method does not require a priori knowledge on the gray levels nor on the number of segments of the reconstruction. Further, it avoids anisotropic artifacts such as geometric staircasing. We demonstrate the suitability of our method for joint image reconstruction and segmentation. We focus on Radon data, where we in particular consider limited data situations. For instance, our method is able to recover all segments of the Shepp–Logan phantom from seven angular views only. We illustrate the practical applicability on a real positron emission tomography dataset. As further applications, we consider spherical Radon data as well as blurred data.
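For illustration only, the following sketch solves the far simpler one-dimensional Potts (piecewise-constant) denoising problem with the classic dynamic program; the paper addresses the two-dimensional problem with Radon-type forward operators, which requires the splitting approach described above.

```python
# Illustrative sketch: the classic O(n^2) dynamic program for the 1-D Potts
# problem min_x gamma*(#jumps) + sum_i (x_i - y_i)^2.
import numpy as np

def potts_1d(y, gamma):
    n = len(y)
    csum = np.concatenate(([0.0], np.cumsum(y)))
    csum2 = np.concatenate(([0.0], np.cumsum(y ** 2)))

    def seg_err(l, r):          # squared error of the best constant on y[l..r]
        s = csum[r + 1] - csum[l]
        s2 = csum2[r + 1] - csum2[l]
        return s2 - s * s / (r - l + 1)

    best = np.full(n + 1, np.inf)   # best[r] = optimal cost for prefix y[0..r-1]
    best[0] = -gamma                # so the first segment is not penalized
    jump = np.zeros(n + 1, dtype=int)
    for r in range(1, n + 1):
        for l in range(1, r + 1):
            cost = best[l - 1] + gamma + seg_err(l - 1, r - 1)
            if cost < best[r]:
                best[r] = cost
                jump[r] = l - 1
    # backtrack the segment boundaries and fill each segment with its mean
    x = np.empty(n)
    r = n
    while r > 0:
        l = jump[r]
        x[l:r] = y[l:r].mean()
        r = l
    return x

y = np.concatenate([np.full(30, 1.0), np.full(40, 3.0), np.full(30, 0.5)])
y += 0.3 * np.random.default_rng(0).standard_normal(len(y))
print(np.round(potts_1d(y, gamma=2.0)[::25], 2))
```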
The goal of this paper is to increase the computation speed of MapReduce jobs by reducing the accuracy of the result. Often, timely processing is more important than the precision of the result. Hadoop has no built-in functionality for such an approximation technique, so the user has to implement sampling techniques manually.
We introduce an automatic system for computing arithmetic approximations. The sampling is based on techniques from statistics and the extrapolation is done generically. This system is also extended by an incremental component which enables the reuse of already computed results to enlarge the sampling size. This can be used iteratively to further increase the sampling size and also the precision of the approximation. We present a transparent incremental sampling approach, so the developed components can be integrated in the Hadoop framework in a non-invasive manner.
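A minimal stand-alone sketch of the sampling-and-extrapolation idea (outside of Hadoop, with synthetic records): a random permutation is fixed once, growing prefixes serve as incrementally enlarged samples that reuse all previously drawn records, and the aggregate is extrapolated from each sample.

```python
# Hedged sketch of approximate aggregation by sampling with incremental
# refinement; the data, sample sizes, and error estimate are illustrative.
import numpy as np

rng = np.random.default_rng(42)
data = rng.lognormal(mean=3.0, sigma=1.0, size=1_000_000)   # stand-in records
N = len(data)

perm = rng.permutation(N)        # fix a random order once; prefixes are samples

def estimate_sum(n):
    sample = data[perm[:n]]      # the first n records reuse all earlier samples
    estimate = N * sample.mean()                         # extrapolated total
    stderr = N * sample.std(ddof=1) / np.sqrt(n)         # rough 1-sigma error
    return estimate, stderr

for n in (5_000, 20_000, 80_000):                        # incremental enlargement
    est, err = estimate_sum(n)
    print(f"n={n:6d}  estimated sum={est:,.0f}  +/- {err:,.0f}")
print(f"exact sum          ={data.sum():,.0f}")
```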
Nonlinear ill-posed problem analysis in model-based parameter estimation and experimental design
(2015)
Discrete ill-posed problems are often encountered in engineering applications. Still, their sound analysis is not yet common practice, and difficulties arising in the determination of uncertain parameters are typically not addressed properly. This contribution provides a tutorial review of methods for identifiability analysis, regularization techniques, and optimal experimental design. A guideline for the analysis and classification of nonlinear ill-posed problems to detect practical identifiability problems is given. Techniques for the regularization of experimental design problems resulting from ill-posed parameter estimations are discussed. Applications are presented for three different case studies of increasing complexity.
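The following hedged sketch illustrates two of the reviewed building blocks on a toy linear problem: detecting a practical identifiability problem from the singular values of the sensitivity matrix, and stabilizing the estimate with Tikhonov regularization. The case studies in the paper are nonlinear and more involved.

```python
# Hedged sketch: identifiability diagnosis via SVD and Tikhonov-regularized
# estimation for a toy linear(ized) parameter-estimation problem.
import numpy as np

rng = np.random.default_rng(0)
# Sensitivity (Jacobian) matrix with two nearly collinear parameter directions
J = rng.standard_normal((50, 3))
J[:, 2] = J[:, 1] + 1e-4 * rng.standard_normal(50)
theta_true = np.array([1.0, -2.0, 0.5])
y = J @ theta_true + 0.01 * rng.standard_normal(50)

s = np.linalg.svd(J, compute_uv=False)
print("singular values:", s, " condition number:", s[0] / s[-1])
# A large condition number / near-zero singular value flags a practically
# non-identifiable parameter combination.

def tikhonov(J, y, alpha):
    # Solve min ||J theta - y||^2 + alpha * ||theta||^2
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + alpha * np.eye(n), J.T @ y)

print("unregularized:", np.linalg.lstsq(J, y, rcond=None)[0])
print("alpha=1e-3    :", tikhonov(J, y, 1e-3))
```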
Bierdeckelsalto
(2015)
A popular game consists of flicking a beer mat lying on the edge of a table upwards from below with the outstretched fingers and then, after one or more somersaults, catching it again between fingers and thumb. From a physical point of view, an impulsive force is exerted on the beer mat. Applying the theorems of linear and angular momentum leads to simple estimates of the mechanics of the beer mat somersault. The experiment can be reproduced with physics simulation programs. High-speed videos complement theory and simulation.
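An illustrative back-of-the-envelope version of such an estimate (the symbols and modelling assumptions below are ours, not taken from the paper): treat the flick as an impulse J applied at the free edge, a distance d from the center of mass of a square mat of side a and mass m, launched vertically.

```latex
% Assumed: impulse J at distance d from the center of mass; thin square mat,
% so I = m a^2 / 12 about the flip axis through the center; vertical launch.
\[
  v = \frac{J}{m}, \qquad
  \omega = \frac{J\,d}{I} = \frac{12\,J\,d}{m\,a^{2}}, \qquad
  t_{\mathrm{flight}} = \frac{2v}{g}, \qquad
  n_{\mathrm{saltos}} \approx \frac{\omega\, t_{\mathrm{flight}}}{2\pi}.
\]
```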
In building software-as-a-service applications, a flexible development environment is key to shipping early and often. Therefore, schema-flexible data stores are becoming more and more popular. They can store data with heterogeneous structure, allowing for new releases to be pushed frequently, without having to migrate legacy data first. However, the current application code must continue to work with any legacy data that has already been persisted in production. To let legacy data structurally "catch up" with the latest application code, developers commonly employ object mapper libraries with life-cycle annotations. Yet when used without caution, they can cause runtime errors and even data loss. We present ControVol, an IDE plugin that detects evolutionary changes to the application code that are incompatible with legacy data. ControVol warns developers already at development time, and even suggests automatic fixes for lazily migrating legacy data when it is loaded into the application. Thus, ControVol ensures that the structure of legacy data can catch up with the structure expected by the latest software release.
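The following hedged Python sketch shows the general lazy-migration pattern on plain dictionary documents; ControVol itself is an IDE plug-in that checks Java object mapper declarations, and the version field, migration functions, and example document below are purely illustrative.

```python
# Illustrative sketch of lazy migration in general: pending structural changes
# are applied to a legacy document only when it is loaded by the current
# application release (field names and versions are hypothetical).
CURRENT_VERSION = 3

def migrate_v1_to_v2(doc):          # release 2 renamed 'name' -> 'full_name'
    doc["full_name"] = doc.pop("name")
    return doc

def migrate_v2_to_v3(doc):          # release 3 added 'signup_year'
    doc.setdefault("signup_year", None)
    return doc

MIGRATIONS = {1: migrate_v1_to_v2, 2: migrate_v2_to_v3}

def load(doc):
    """Let a legacy entity structurally 'catch up' with the latest release."""
    doc = dict(doc)
    version = doc.pop("_schema_version", 1)
    while version < CURRENT_VERSION:
        doc = MIGRATIONS[version](doc)
        version += 1
    doc["_schema_version"] = CURRENT_VERSION
    return doc

legacy = {"_schema_version": 1, "name": "Ada Lovelace"}
print(load(legacy))
# {'full_name': 'Ada Lovelace', 'signup_year': None, '_schema_version': 3}
```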
We address a practical challenge in agile web development against NoSQL data stores: Upon a new release of the web application, entities already persisted in production no longer match the application code. Rather than migrating all legacy entities eagerly (prior to the release) and at the cost of application downtime, lazy data migration is a popular alternative: When a legacy entity is loaded by the application, all pending structural changes are applied. Yet correctly migrating legacy data from several releases back, involving more than one entity at a time, is not trivial. In this paper, we propose a holistic model in non-recursive Datalog with negation (Datalog¬non-rec) for reading, writing, and migrating data. In implementing our model, we may blend established Datalog evaluation algorithms, such as incremental evaluation, with certain rules evaluated bottom-up and certain rules evaluated top-down with sideways information passing. Our systematic approach guarantees that, from the viewpoint of the application, it remains transparent whether data is migrated eagerly or lazily.
We consider the task of building Big Data software systems, offered as software-as-a-service. These applications are commonly backed by NoSQL data stores that address the proverbial Vs of Big Data processing: NoSQL data stores can handle large volumes of data, and many systems do not enforce a global schema, to account for structural variety in data. Thus, software engineers can design the data model on the go, a flexibility that is particularly crucial in agile software development. However, NoSQL data stores commonly do not yet account for the veracity of changes in the structure of persisted data. Yet such changes are an inevitable consequence of agile software development. In most NoSQL-based application stacks, schema evolution is handled completely within the application code, usually involving object mapper libraries. Yet simple code refactorings, such as renaming a class attribute at the source code level, can cause data loss or runtime errors once the application has been deployed to production. We address this pain point by contributing type-checking rules that we have implemented within an IDE plug-in. Our plug-in ControVol statically type checks the object mapper class declarations against the code release history. ControVol is thus capable of detecting common yet risky cases of mismatched data and schema, and can even suggest automatic fixes.
Effective software engineering demands a coordinated effort. Unfortunately, a comprehensive view on developer coordination is rarely available to support software-engineering decisions, despite the significant implications on software quality, software architecture, and developer productivity. We present a fine-grained, verifiable, and fully automated approach to capture a view on developer coordination, based on commit information and source-code structure, mined from version-control systems. We apply methodology from network analysis and machine learning to identify developer communities automatically. Compared to previous work, our approach is fine-grained, and identifies statistically significant communities using order-statistics and a community-verification technique based on graph conductance. To demonstrate the scalability and generality of our approach, we analyze ten open-source projects with complex and active histories, written in various programming languages. By surveying 53 open-source developers from the ten projects, we validate the authenticity of inferred community structure with respect to reality. Our results indicate that developers of open-source projects form statistically significant community structures and this particular view on collaboration largely coincides with developers' perceptions of real-world collaboration.
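As a hedged sketch of the general idea (not the paper's pipeline, which additionally verifies communities via order statistics and graph conductance), the following builds a small developer network from a toy commit log and detects communities with networkx's modularity heuristic.

```python
# Hedged sketch: developer network from commit data plus community detection;
# the commit log below is a made-up stand-in, not data from the studied projects.
import networkx as nx
from itertools import combinations
from networkx.algorithms.community import greedy_modularity_communities

# Toy commit log: (developer, artifact touched)
commits = [
    ("alice", "core.c"), ("bob", "core.c"), ("alice", "net.c"),
    ("carol", "ui.py"), ("dave", "ui.py"), ("carol", "style.css"),
    ("bob", "net.c"), ("dave", "style.css"),
]

# Connect developers who touched the same artifact; edge weight = co-changes
G = nx.Graph()
touched = {}
for dev, path in commits:
    touched.setdefault(path, set()).add(dev)
for devs in touched.values():
    for a, b in combinations(sorted(devs), 2):
        w = G.get_edge_data(a, b, {"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

communities = greedy_modularity_communities(G, weight="weight")
for i, c in enumerate(communities):
    print(f"community {i}: {sorted(c)}")
```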
Modeling, identification and control of an antagonistically actuated joint for telerobotic systems
(2015)
Within this paper, a modeling, identification, and control technique for an antagonistically actuated joint consisting of two pneumatically actuated muscles is presented. The antagonistically actuated joint acts as a test bench for control architectures which are going to be used to control an exoskeleton within a telerobotic system. A static and a dynamic model of the muscle and the joint are derived, and the parameters of the models are identified using a least-squares algorithm. The control architecture, consisting of an inner pressure controller and an outer position controller, is presented. The pressure controller is evaluated using switching valves and compared against proportional valves.
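A hedged sketch of the identification step alone: a hypothetical static muscle model that is linear in its parameters, fitted to synthetic measurements with ordinary least squares. The paper's actual static and dynamic models and their parameters differ.

```python
# Hedged sketch of least-squares parameter identification for an assumed
# static muscle model F = a0 + a1*p + a2*kappa + a3*p*kappa (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
pressure = rng.uniform(0.0, 6.0, 200)        # p, muscle pressure [bar]
contraction = rng.uniform(0.0, 0.25, 200)    # kappa, relative contraction

a_true = np.array([-50.0, 180.0, -400.0, -60.0])
Phi = np.column_stack([np.ones_like(pressure), pressure,
                       contraction, pressure * contraction])
force = Phi @ a_true + 5.0 * rng.standard_normal(200)   # noisy measurements

a_hat, residuals, rank, _ = np.linalg.lstsq(Phi, force, rcond=None)
print("identified parameters:", np.round(a_hat, 1))
print("true parameters      :", a_true)
```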
Building scalable web applications on top of NoSQL data stores is becoming common practice. Many of these data stores can easily be accessed programmatically, and do not enforce a schema. Software engineers can design the data model on the go, a flexibility that is crucial in agile software development. The typical tasks of database schema management are now handled within the application code, usually involving object mapper libraries. However, today’s Integrated Development Environments (IDEs) lack the proper tool support when it comes to managing the combined evolution of the application code and of the schema. Yet simple refactorings such as renaming an attribute at the source code level can cause irretrievable data loss or runtime errors once the application is serving in production. In this demo, we present ControVol, a framework for controlled schema evolution in application development against NoSQL data stores. ControVol is integrated into the IDE and statically type checks object mapper class declarations against the schema evolution history, as recorded by the code repository. ControVol is capable of warning of common yet risky cases of mismatched data and schema. ControVol is further able to suggest quick fixes by which developers can have these issues automatically resolved.
We present a novel technique for rendering depth of field that addresses difficult overlap cases, such as close, but out-of-focus, geometry in the near-field. Such scene configurations are not managed well by state-of-the-art post-processing approaches since essential information is missing due to occlusion. Our proposed algorithm renders the scene from a single camera position and computes a layered image using a single pass by constructing per-pixel lists. These lists can be filtered progressively to generate differently blurred representations of the scene. We show how this structure can be exploited to generate depth of field in real-time, even in complicated scene constellations.
Rendering performance is an everlasting goal of computer graphics and a significant driver for advances in both hardware architecture and algorithms. Thereby, it has become possible to apply advanced computer graphics technology even in low-cost embedded appliances, such as car instruments. Yet, to come up with an efficient implementation, developers have to put enormous efforts into hardware/problem-specific tailoring, fine-tuning, and domain exploration, which requires profound expert knowledge. If a good solution has been found, there is a high probability that it does not work as well with other architectures or even the next hardware generation. Generative DSL-based approaches could mitigate these efforts and provide for an efficient exploration of algorithmic variants and hardware-specific tuning ideas. However, in vertically organized industries, such as automotive, suppliers are reluctant to introduce these techniques as they fear loss of control, high introduction costs, and additional constraints imposed by the OEM with respect to software and tool-chain certification. Moreover, suppliers do not want to share their generic solutions with the OEM, but only concrete instances. To this end, we propose a lightweight and incremental approach for meta programming of graphics applications. Our approach relies on an existing formulation of C-like languages that is amenable to meta programming, which we extend to become a lightweight language to combine algorithmic features. Our method provides a concise notation for meta programs and generates easily sharable output in the appropriate C-style target language.
With ever increasing ray traversal and hierarchy construction performance the application of ray tracing to problems often tackled by rasterization-based algorithms is becoming a viable alternative. This is especially desirable as the ground truth for these algorithms is often determined by using ray tracing and thus directly applying it is the simplest way to generate images satisfying the reference. In this paper we propose a very efficient pre-process to speed up the construction and traversal of sub-optimal, but fast-to-build hierarchies used for interactive ray tracing and show how it can be applied to shadow rays in a hybrid environment, where ray tracing is used to sample area lights for scene positions found and shaded via rasterization.
A common way to ray trace subdivision surfaces is by constructing and traversing spatial hierarchies on top of tessellated input primitives. Unfortunately, tessellating surfaces requires a substantial amount of memory storage, and involves significant construction and memory I/O costs. In this paper, we propose a lazy-build caching scheme to efficiently handle these problems while also exploiting the capabilities of today's many-core architectures. To this end, we lazily tessellate patches only when necessary, and utilize adaptive subdivision to efficiently evaluate the underlying surface representation. The core idea of our approach is a shared lazy evaluation cache, which triggers and maintains the surface tessellation. We combine our caching scheme with SIMD-optimized subdivision primitive evaluation and fast hierarchy construction over the tessellated surface. This allows us to achieve high ray tracing performance in complex scenes, outperforming the state of the art while requiring only a fraction of the memory. In addition, our method stays within a fixed memory budget regardless of the tessellation level, which is essential for many applications such as movie production rendering. Beyond the results of this paper, we have integrated our method into Embree, an open source ray tracing framework, thus making interactive ray tracing of subdivision surfaces publicly available.
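The shared lazy-evaluation cache can be illustrated with the following much simplified, single-threaded Python sketch (the real implementation is SIMD-optimized, many-core aware, and integrated into Embree): patches are tessellated only on first use, kept within a fixed budget, and evicted in least-recently-used order.

```python
# Illustrative sketch of a lazy tessellation cache with a fixed memory budget;
# patch IDs, levels, and the "triangles" returned are purely hypothetical.
from collections import OrderedDict

def tessellate(patch_id, level):
    """Stand-in for expensive adaptive subdivision of one patch."""
    print(f"  tessellating patch {patch_id} at level {level}")
    return [(patch_id, i) for i in range(level * level)]     # fake triangles

class LazyTessellationCache:
    def __init__(self, budget=2):
        self.budget = budget                 # fixed budget (in cached patches)
        self.cache = OrderedDict()

    def get(self, patch_id, level):
        key = (patch_id, level)
        if key in self.cache:
            self.cache.move_to_end(key)      # mark as recently used
            return self.cache[key]
        triangles = tessellate(patch_id, level)   # lazily build on first use
        self.cache[key] = triangles
        if len(self.cache) > self.budget:
            self.cache.popitem(last=False)   # evict least recently used entry
        return triangles

cache = LazyTessellationCache(budget=2)
for patch in [0, 1, 0, 2, 0, 1]:             # rays hitting patches in order
    cache.get(patch, level=4)
```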
In this paper, we introduce a novel technique for pre-filtering multi-layer shadow maps. The occluders in the scene are stored as variable-length lists of fragments for each texel. We show how this representation can be filtered by progressively merging these lists. In contrast to previous pre-filtering techniques, our method better captures the distribution of depth values, resulting in a much higher shadow quality for overlapping occluders and occluders with different depths. The pre-filtered maps are generated and evaluated directly on the GPU, and provide efficient queries for shadow tests with arbitrary filter sizes. Accurate soft shadows are rendered in real-time even for complex scenes and difficult setups. Our results demonstrate that our pre-filtered maps are general and particularly scalable.
In this paper we present a scheduling approach for safety-critical, fault-tolerant, multicore real-time embedded systems. For this kind of system, not only the correctness of a computed result but also strict adherence to the timing requirements of the computation is essential to avoid any kind of damage. To react to unpredictable, arbitrary hardware faults, suitable error detection mechanisms have to be applied. The error itself as well as its detection and correction have a great impact on the system's timing behavior. To still meet the real-time requirements, the scheduling algorithm used has to ensure maximum flexibility with respect to disturbances of the timing. The group of Proportionate Fair (Pfair) multicore scheduling algorithms has been proven to create an optimal schedule in polynomial time. The contribution of this paper is a Pfair-based algorithm that uses tight coupling between the error detection mechanisms and the scheduler of the real-time operating system to establish a loop-back connection.
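For illustration, the following sketch implements EPDF (earliest pseudo-deadline first), a simple member of the Pfair family, without the PD² tie-breaking rules and without the coupling to error detection that the paper contributes; the task parameters are toy values.

```python
# Hedged sketch of EPDF scheduling: each task of weight w = e/p is split into
# unit-time subtasks with pseudo-release floor((i-1)/w) and pseudo-deadline
# ceil(i/w); per slot, the m ready subtasks with earliest deadlines run.
import math

def epdf(tasks, cores, slots):
    """tasks: list of (execution, period) pairs; weight = execution/period."""
    next_sub = [1] * len(tasks)          # index of each task's next subtask
    timeline = []
    for t in range(slots):
        ready = []
        for k, (e, p) in enumerate(tasks):
            w = e / p
            i = next_sub[k]
            release = math.floor((i - 1) / w)      # pseudo-release
            deadline = math.ceil(i / w)            # pseudo-deadline
            if release <= t:
                ready.append((deadline, k))
        ready.sort()                               # earliest deadline first
        running = [k for _, k in ready[:cores]]
        for k in running:
            next_sub[k] += 1
        timeline.append(running)
    return timeline

# Three tasks with total utilization 2.0 scheduled on two cores
for slot, running in enumerate(epdf([(2, 3), (2, 3), (2, 3)], cores=2, slots=6)):
    print(f"slot {slot}: tasks {running}")
```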
Measuring competencies may serve as a feedback mechanism as well as a judgment device for a lecturer. As measuring every competency from a catalogue of competencies is not very viable, the to-be-measured competencies are grouped in competency profiles. Further, assessment practices are shown and applied to a course in a study program. A discussion of useful practices concludes this contribution.
Safety of embedded systems has the highest priority because it contributes to customer confidence and thereby ensures the growth of new markets, such as electromobility. In series production, fail-safe systems as well as fault-tolerant systems are realized with redundant hardware concepts, like dual-core microcontrollers running in lock-step mode, to meet the highest safety requirements given by standards such as ISO 26262 or IEC 61508. In contrast to the hardware redundancy approach, approaches based on information, time, and/or software redundancy have been available for several years. One of them is known as coded processing or AN codes. Coded processing is capable of reducing redundancy in hardware by adding diverse redundancy in software. But the breakthrough of coded processing never took place. One reason for this seems to be the myths which are widely propagated on this subject and the associated uncertainties. In this paper, some of these myths are busted, such as the use of prime numbers as transformation factor A, the myth that greater transformation factors are better, or the myth about the residual error probability defined as 1/A. Some of them have been propagated since 1989. The aim of this paper is to provide more clarity and understanding of this technique, and perhaps to pave the way for further functional safety concepts based on coded processing approaches.
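A minimal sketch of the coded-processing idea itself (the chosen value of A below is an arbitrary example, not a recommendation from the paper): values are carried as multiples of a transformation factor A, sums of code words remain code words, and a corrupted operand is detected by a failed divisibility check.

```python
# Minimal AN-code sketch: encode x as A*x, compute on coded values, and detect
# faults via the residue check value % A == 0 (A chosen arbitrarily here).
A = 58659

def encode(x):
    return A * x

def decode(xc):
    if xc % A != 0:                 # non-zero residue -> corrupted value
        raise RuntimeError("coded-processing check failed: corrupted operand")
    return xc // A

def coded_add(xc, yc):
    return xc + yc                  # A*x + A*y = A*(x + y) stays a code word

a, b = encode(7), encode(35)
print(decode(coded_add(a, b)))      # 42

corrupted = coded_add(a, b) ^ (1 << 4)   # simulate a single bit flip
try:
    decode(corrupted)
except RuntimeError as err:
    print(err)
```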
Traditional methods rely on static timing analysis techniques to compute the Worst Case Response Time of tasks in real-time systems. Multicore real-time systems are faced with concurrent task executions, semaphore accesses, and task migrations, where it may be difficult to obtain the worst-case upper bound. A new three-staged probabilistic estimation concept is presented. Worst Case Response Times are estimated for task sets which consist of tasks with multiple time bases. The concept involves data generation with sample classification and sample size equalization, model fitting, and Worst Case Response Time estimation on the basis of extreme value distribution models. A Generalized Pareto Distribution model fit method which includes threshold detection and parameter estimation is also presented. Sample classification in combination with the new Generalized Pareto Distribution model fit method allows estimating Worst Case Response Times with low pessimism ranges compared to estimation methods that use the Generalized Pareto or the Gumbel max distribution without sample classification.
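The peaks-over-threshold core of such an estimation can be sketched as follows (without the sample classification and size equalization stages of the paper, and on synthetic response times): fit a Generalized Pareto Distribution to the exceedances over a high threshold and read off a rare-event quantile.

```python
# Hedged sketch of the peaks-over-threshold step only; the data, threshold
# choice, and target exceedance probability are illustrative assumptions.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(7)
# Stand-in for measured response times [us]: bulk plus a heavy right tail
response_times = 100 + rng.gamma(shape=2.0, scale=8.0, size=20_000)

threshold = np.quantile(response_times, 0.95)
excesses = response_times[response_times > threshold] - threshold
zeta = excesses.size / response_times.size          # exceedance fraction

shape, _, scale = genpareto.fit(excesses, floc=0)   # fit GPD to the excesses

p = 1e-6                                            # target exceedance prob.
wcrt_est = threshold + genpareto.ppf(1 - p / zeta, shape, loc=0, scale=scale)
print(f"threshold = {threshold:.1f} us, GPD shape = {shape:.3f}")
print(f"estimated WCRT at exceedance {p:g}: {wcrt_est:.1f} us")
```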
Automatische Multicore-Echtzeitvalidierung – Ein Prozess für modellbasierte Softwareentwicklung
(2015)
Considering the claim of furthering self-directed learning in higher education in general and in Software Engineering education in particular, this paper deals with a new approach on understanding and facilitating self-directed learning. This approach involves the concept of subjective theories, which are expected to influence students' self-directed learning. Therefore this paper presents the intended qualitative research design for reconstructing these subjective theories and for developing ways of integrating them in didactical situations in higher education and especially in Software Engineering education.
Within this thesis, we present our suggestions as to why playful learning in software engineering education is useful for mediating generic competences in academic teaching. Therefore, we identified competences which are addressed by playful learning and mapped them to the generic competences demanded in software engineering. Due to the good match, we analyzed current implementations of playful learning and their design regarding the mediation of the required soft skills. Based on the lack of effective implementations, we close our paper with an exemplary design for playful learning.
Mögliche Auswirkungen der eIDAS-Verordnung auf die Telematikinfrastruktur und die eArztausweise
(2015)
IS offshoring research regarding knowledge transfer processes, roles involved, as well as influencing factors is characterized by its diverse and heterogeneous nature. Covering the last fifteen years of IS offshoring research, this paper provides a consolidated view of the field of study. It presents a generic knowledge transfer process consisting of four stages and five milestones. These stages are characterized and evaluated according to their relevance for knowledge transfer, the types of knowledge transferred, the main activities and methods for transfer and testing, as well as the goals pursued. Furthermore, we aggregate the diverse literature findings relating to individuals who facilitate knowledge transfer processes into a general role. We label this role “offshore coordinator” and present its core tasks and necessary skills. In addition, we identify and cluster core factors that influence success or failure of knowledge transfer. In summary, our study answers calls to discontinue with empirically derived definitions in favor of theory-based conceptualization of the IS offshoring research field with respect to knowledge transfer processes, roles, and success factors. Future studies can build on these results and examine questions with respect to particular characteristics of knowledge transfer processes and their influencing success and failure factors.
Lean Management has been successfully applied by companies around the world, mainly in production/manufacturing functions. Recently, the interest to investigate a wider application of Lean Management, especially in service functions, has increased. However, it is not clear how Lean Management can be applied to IT organizations. Therefore, this study aims to provide an overview of common characteristics and future research directions. A literature review on existing scholarly research from January 2004 to June 2014 is conducted. Using a database-driven search approach, a total of 1,206 research contributions were found, of which 49 were identified as relevant. Results indicate a low theory grounding of mostly formulative and interpretative research items. This implies that research on Lean Management of IT organizations is still in its nascent state. Content-wise, five research themes emerge. The majority of research investigates IT organizations in a role to support Lean Management in production/manufacturing functions (determining “what to work on”); therefore, more research on how Lean Management can be applied to IT organizations themselves (determining “how to work”) could be beneficial. Future research could also try to build on Change Management theories, as the implementation of Lean Management is of transformational character.
Lean Management has been successfully applied in production/manufacturing functions for more than four decades. Recently, the interest to investigate Lean Management also in service functions has increased. Therefore, this study aims to (1) consolidate critical success factors (CSFs) for the implementation of Lean Management in IT organizations; and (2) describe a theoretical foundation for these CSFs. With respect to (1), a database-driven search was conducted. CSFs were then extracted and categorized. In total, 13 CSF groups were assigned to three dimensions: Mindset and behavior; Organization and skills; and Process facilitation and performance management. To understand the underlying mechanisms better, and with respect to (2), we related existing (IS) theory to the identified groups of CSFs. In particular, five theoretical concepts are discussed: Absorptive capacity, Agency theory, Cognitive dissonance, Dynamic capabilities, and System dynamics. Future research needs to validate the results of (1) and (2) empirically in IT functions.
An increasing amount of IS offshoring research has been published during the last four years. This paper presents a comprehensive view of the field of study from a managerial point of view. It provides a consolidated view of the field of study between 2010 and 2013 based on a manual search comprising 69 selected journals and 9 conferences, as well as a search using 6 journal databases. The literature review ensures continuity of research by connecting to a comprehensive literature analysis covering the years 1999 to 2009. This way it consolidates and critically reflects the state of the research of the last 14 years. Overall, we compiled 95 relevant publications originating from leading IS journals and IS conference proceedings. The results indicate that IS offshoring research is largely non-theory based, using almost entirely empirical data and interpretive research methods and, to a smaller extent, positivist research designs. The ISO research of the last 14 years focuses on the implementation stages “how” and “outcome”, while the pre-implementation stages “why”, “what”, and “which” are comparatively sparsely researched. Future studies should apply a more theory-driven approach with greater attention to pre-implementation aspects of information systems offshoring. In addition, future research should investigate the special nature of near- and onshoring, captive offshoring, as well as agile (project) management techniques suitable for ISO.