Digitalisierung
Year of publication
- 2016 (38)
Keywords
- Datenschutz (2)
- Datensicherung (2)
- Informationstechnik (2)
- Lean IT (2)
- Lean Management (2)
- Literaturbericht (2)
- 3D scanning (1)
- AUTOSAR (1)
- Abfragesprache (1)
- Application development and maintenance (1)
Institute
- Fakultät Informatik und Mathematik (27)
- Fakultät Elektro- und Informationstechnik (9)
- Laboratory for Safe and Secure Systems (LAS3) (8)
- Regensburg Strategic IT Management (ReSITM) (6)
- Labor für Digitalisierung (LFD) (5)
- Fakultät Maschinenbau (3)
- Labor eHealth (eH) (2)
- Regensburg Center of Biomedical Engineering - RCBE (2)
- Labor Elektroakustik (1)
- Labor Informationssicherheit und Compliance (ISC) (1)
Review status
- peer-reviewed (21)
- reviewed (1)
Simultaneous EEG-fMRI provides an increasingly attractive research tool to investigate cognitive processes with high temporal and spatial resolution. However, artifacts in EEG data introduced by the MR scanner still remain a major obstacle. This study, employing commonly used artifact correction steps, shows that head motion, one overlooked major source of artifacts in EEG-fMRI data, can cause plausible EEG effects and EEG–BOLD correlations. Specifically, low-frequency EEG (< 20 Hz) is strongly correlated with in-scanner movement. Accordingly, minor head motion (< 0.2 mm) induces spurious effects in a twofold manner: Small differences in task-correlated motion elicit spurious low-frequency effects, and, as motion concurrently influences fMRI data, EEG–BOLD correlations closely match motion-fMRI correlations. We demonstrate these effects in a memory encoding experiment showing that obtained theta power (~ 3–7 Hz) effects and channel-level theta–BOLD correlations reflect motion in the scanner. These findings highlight an important caveat that needs to be addressed by future EEG-fMRI studies.
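To make the caveat concrete, the following minimal Python sketch (not the study's actual pipeline; the sampling rate, the synthetic motion trace, and the contaminated channel are all assumptions) shows how one might screen a channel for the reported coupling between low-frequency EEG power and head motion:

```python
# Minimal sketch (not the study's pipeline): screen a channel for correlation
# between low-frequency EEG power and head motion. All data here is synthetic.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

rng = np.random.default_rng(0)
fs = 250                                    # assumed EEG sampling rate (Hz)
t = np.arange(0, 300, 1 / fs)               # five minutes of samples
motion = np.cumsum(rng.standard_normal(t.size)) * 1e-3   # synthetic displacement trace (mm)
eeg = rng.standard_normal(t.size) + 5.0 * motion         # synthetic channel contaminated by motion

# Low-pass below 20 Hz, where the reported motion coupling is strongest.
b, a = butter(4, 20 / (fs / 2), btype="low")
low_freq = filtfilt(b, a, eeg)
power = np.abs(hilbert(low_freq)) ** 2      # instantaneous low-frequency power

r = np.corrcoef(power, np.abs(motion))[0, 1]
print(f"correlation of low-frequency EEG power with |head motion|: {r:.2f}")
```

A strong correlation in such a screen would flag exactly the kind of motion-driven low-frequency effect the study warns about.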
In this work, a method for reducing the number of degrees of freedom in online optimal dynamic experiment design problems for systems described by differential equations is proposed. The online problems are posed such that only the inputs which extend an operation policy resulting from an experiment designed offline are optimized. This is done by formulating them as multiple experiment designs, considering explicitly the information of the experiment designed offline and possible time delays unknown a priori. The performance of the method is shown for the case of the separation of isopropanolol isomers in a Simulated Moving Bed plant.
Computational models of cognition provide an interface to connect advanced mathematical tools and methods to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical optimization techniques can be applied to efficiently identify optimal parameter values with respect to different optimization goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential optimal performance. We conclude by discussing possible extensions of our approach as well as future steps towards applying more powerful derivative-based optimization methods.
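As a rough illustration of the kind of parameter optimization described above, the following sketch maximizes the simulated performance of a stand-in model over a single parameter; the function simulate_performance and its parameter range are hypothetical placeholders, not the ACT-R instance-based learning model or the Sugar Factory task:

```python
# Illustrative sketch only: finding a model parameter value that maximizes
# simulated task performance. simulate_performance is a stand-in objective,
# not the ACT-R instance-based learning model of the Sugar Factory task.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)

def simulate_performance(noise, trials=2000):
    """Average payoff of a toy learner as a function of its (assumed) noise parameter."""
    scores = rng.normal(loc=1.0 - (noise - 0.25) ** 2, scale=0.05, size=trials)
    return scores.mean()

# Maximize performance by minimizing its negative over a plausible parameter range.
result = minimize_scalar(lambda s: -simulate_performance(s), bounds=(0.0, 1.0), method="bounded")
print(f"best parameter value ~ {result.x:.3f}, expected performance ~ {-result.fun:.3f}")
```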
The method of loci is one, if not the most, efficient mnemonic encoding strategy. This spatial mnemonic combines the core cognitive processes commonly linked to medial temporal lobe (MTL) activity: spatial and associative memory processes. During such processes, fMRI studies consistently demonstrate MTL activity, while electrophysiological studies have emphasized the important role of theta oscillations (3–8 Hz) in the MTL. However, it is still unknown whether increases or decreases in theta power co-occur with increased BOLD signal in the MTL during memory encoding. To investigate this question, we recorded EEG and fMRI separately, while human participants used the spatial method of loci or the pegword method, a similarly associative but nonspatial mnemonic. The more effective spatial mnemonic induced a pronounced theta power decrease source localized to the left MTL compared with the nonspatial associative mnemonic strategy. This effect was mirrored by BOLD signal increases in the MTL. Successful encoding, irrespective of the strategy used, elicited decreases in left temporal theta power and increases in MTL BOLD activity. This pattern of results suggests a negative relationship between theta power and BOLD signal changes in the MTL during memory encoding and spatial processing. The findings extend the well known negative relation of alpha/beta oscillations and BOLD signals in the cortex to theta oscillations in the MTL.
NoSQL data stores have become very popular over the last years, as good reasons are justifying their application: One attractive feature of many systems is their schema flexibility, which may be preferable in agile software development projects. Due to their horizontal scalability, NoSQL data stores make it possible to efficiently process large amounts of data. Some systems, designed as data backends for interactive applications, can also manage highly frequent user requests. Apart from these advantages, there are also downsides to NoSQL data stores that create new challenges for software development: Missing standards in query languages make it difficult to build data store independent applications. Schema flexibility in the data store shifts the responsibility for schema management into the application. This article identifies substantial challenges as well as solution statements from research and practice. The focus of our survey is on schema-flexible NoSQL data management systems with an aggregate-oriented data model, i. e., key-value data management systems, as well as document and column family data management systems.
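The schema-management burden mentioned above can be illustrated with a small, hypothetical example (plain Python dicts stand in for documents; the attribute names are made up): once two releases have written structurally different documents, the application code, not the data store, must cope with both variants:

```python
# Sketch of the schema-flexibility trade-off: documents written by different
# application releases coexist, and the application must handle both shapes.
legacy_customer = {"_id": 1, "name": "Alice", "zip": "93053"}               # written by release 1
current_customer = {"_id": 2, "name": "Bob", "address": {"zip": "93047"}}   # written by release 2

def get_zip(doc):
    # The structural variant check lives in application code, not in the store.
    if "address" in doc:
        return doc["address"]["zip"]
    return doc.get("zip")

print(get_zip(legacy_customer), get_zip(current_customer))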
When an incremental release of a web application is deployed, the structure of data already persisted in the production database may no longer match what the application code expects. Traditionally, eager schema migration is called for, where all legacy data is migrated in one go. With the growing popularity of schema-flexible NoSQL data stores, lazy forms of data migration have emerged: Legacy entities are migrated on-the-fly, one at a time, when they are loaded by the application. In this demo, we present Datalution, a tool demonstrating the merits of lazy data migration. Datalution can apply chains of pending schema changes, due to its Datalog-based internal representation. The Datalution approach thus ensures that schema evolution, as part of continuous deployment, is carried out correctly.
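A minimal sketch of the lazy-migration idea, under the assumption of two pending schema changes (a rename and an attribute addition); this is illustrative Python only, not Datalution's Datalog-based mechanism:

```python
# Lazy migration sketch: a chain of pending schema changes is applied to each
# legacy entity on-the-fly when it is loaded, one entity at a time.

def rename_name_to_fullname(entity):
    """Schema change 1: rename attribute 'name' to 'fullname'."""
    if "name" in entity:
        entity = dict(entity)
        entity["fullname"] = entity.pop("name")
    return entity

def add_status_default(entity):
    """Schema change 2: add attribute 'status' with a default value."""
    return {**entity, "status": entity.get("status", "active")}

PENDING_CHANGES = [rename_name_to_fullname, add_status_default]

def load_entity(raw_entity):
    """Apply the whole chain of pending changes when an entity is read."""
    for change in PENDING_CHANGES:
        raw_entity = change(raw_entity)
    return raw_entity

legacy = {"_id": 7, "name": "Carol"}      # persisted by an old release
print(load_entity(legacy))                 # migrated lazily at load time
```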
As camera and projector hardware becomes more and more affordable and software algorithms become more sophisticated, the range of applications for camera-projector configurations widens. Unlike for pure camera calibration, only a few comparative surveys of projector calibration methods exist. Therefore, in this paper, two readily available algorithms for the calibration of such setups are studied and methods for the evaluation of the results are proposed. Additionally, statistical evaluations are performed that consider the influence of factors such as the hardware arrangement, the number of input images, and the calibration target characteristics on the accuracy of the calibration results. Ground truth comparison data is obtained through a robotic system and structured light 3D scanning.
Social network analysis is extremely well supported by the R community and is routinely used for studying the relationships between people engaged in collaborative activities. While there has been rapid development of new approaches and metrics in this field, the challenging question of validity (how well insights derived from social networks agree with reality) is often difficult to address. We propose the use of several R packages to generate interactive surveys that are specifically well suited for validating social network analyses. Using our web-based survey application, we were able to validate the results of applying community-detection algorithms to infer the organizational structure of software developers contributing to open-source projects.
In big data software engineering, the schema flexibility of NoSQL document stores is a major selling point: When the document store itself does not actively manage a schema, the data model is maintained within the application. Just like object-relational mappers for relational databases, object-NoSQL mappers are part of professional software development with NoSQL document stores. Some mappers go beyond merely loading and storing Java objects: Using dedicated evolution annotations, developers may conveniently add, remove, or rename attributes from stored objects, and also conduct more complex transformations. In this paper, we analyze the dissemination of this technology in Java open source projects. While we find evidence on GitHub that evolution annotations are indeed being used, developers do not employ them so much for evolving the data model, but to solve different tasks instead. Our observations trigger interesting questions for further research.
Software evolution is a fundamental process that transcends the realm of technical artifacts and permeates the entire organizational structure of a software project. By means of a longitudinal empirical study of 18 large open-source projects, we examine and discuss the evolutionary principles that govern the coordination of developers. By applying a network-analytic approach, we found that the implicit and self-organizing structure of developer coordination is ubiquitously described by non-random organizational principles that defy conventional software-engineering wisdom. In particular, we found that: (a) developers form scale-free networks, in which the majority of coordination requirements arise among an extremely small number of developers, (b) developers tend to accumulate coordination requirements with more and more developers over time, presumably limited by an upper bound, and (c) initially developers are hierarchically arranged, but over time, form a hybrid structure, in which core developers are hierarchically arranged and peripheral developers are not. Our results suggest that the organizational structure of large projects is constrained to evolve towards a state that balances the costs and benefits of developer coordination, and the mechanisms used to achieve this state depend on the project’s scale.
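Finding (a) can be checked, for instance, by inspecting the degree distribution of the coordination network; the sketch below uses a synthetic scale-free graph from networkx as a stand-in for a mined developer network:

```python
# Minimal check for a heavy-tailed degree distribution in a coordination network.
# The graph here is synthetic; the study mines such networks from project histories.
import collections
import networkx as nx

G = nx.barabasi_albert_graph(n=200, m=2, seed=42)   # stand-in scale-free network

degrees = sorted((d for _, d in G.degree()), reverse=True)
counts = collections.Counter(degrees)
top_share = sum(degrees[:10]) / sum(degrees)
print(f"share of coordination edges touching the 10 best-connected nodes: {top_share:.0%}")
for degree, count in sorted(counts.items())[:5]:
    print(f"degree {degree}: {count} developers")
```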
Modifications to open-source software (OSS) are often provided in the form of "patch stacks" -- sets of changes (patches) that modify a given body of source code. Maintaining patch stacks over extended periods of time is problematic when the underlying base project changes frequently. This necessitates a continuous and engineering-intensive adaptation of the stack. Nonetheless, long-term maintenance is an important problem for changes that are not integrated into projects, for instance when they are controversial or only of value to a limited group of users.
We present and implement a methodology to systematically examine the temporal evolution of patch stacks, track non-functional properties like integrability and maintainability, and estimate the eventual economic and engineering effort required to successfully develop and maintain patch stacks. Our results provide a basis for quantitative research on patch stacks, including statistical analyses and other methods that lead to actionable advice on the construction and long-term maintenance of custom extensions to OSS.
In this paper we present a method to efficiently cull large parts of a scene prior to shadow map computations for many-lights settings. Our method is agnostic to how the light sources are generated and thus works with any method of light distribution. Our approach is based on previous work in culling for ray traversal to speed up area light sampling. Applied to shadow mapping, our method works for high- and low-resolution shadow maps and, in contrast to previous work on many-lights rendering, neither entails scene approximations nor imposes limits on light range, while still providing significant gains in performance. In contrast to standard culling methods, shadow map rendering itself is sped up by a factor of 1.5 to 8.6, while the speedup of shadow map rendering, lookup, and shading together ranges from 1.1 to 4.2.
We present a method to compute post-processing depth of field (DOF) that produces more accurate results than previous approaches. Our method is based on existing approaches, namely DOF rendering by splatting and fast, tile-based particle accumulation. Using tile-based accumulation allows us to correctly sort out-of-focus pixels and apply proper alpha blending to avoid artifacts commonly encountered with filter-based depth of field methods.
Parametric surfaces are an essential modeling tool in computer aided design and movie production. Even though their use is well established in industry, generating ray-traced images adds significant cost in time and memory consumption. Ray tracing such surfaces is usually accomplished by subdividing the surfaces on-the-fly, or by conversion to a polygonal representation. However, on-the-fly subdivision is computationally very expensive, whereas polygonal meshes require large amounts of memory. This is a particular problem for parametric surfaces with displacement, where very fine tessellation is required to faithfully represent the shape. Hence, memory restrictions are the major challenge in production rendering. In this paper, we present a novel solution to this problem. We propose a compression scheme for a-priori Bounding Volume Hierarchies (BVHs) on parametric patches, that reduces the data required for the hierarchy by a factor of up to 48. We further propose an approximate evaluation method that does not require leaf geometry, yielding an overall reduction of memory consumption by a factor of 60 over regular BVHs on indexed face sets and by a factor of 16 over established state-of-the-art compression schemes. Alternatively, our compression can simply be applied to a standard BVH while keeping the leaf geometry, resulting in a compression rate of up to 2:1 over current methods. Although decompression generates additional costs during traversal, we can manage very complex scenes even on the memory restrictive GPU at competitive render times.
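A rough sketch of the general idea behind such BVH compression, offered as an illustration rather than the paper's exact scheme: child bounding boxes are quantized to 8-bit offsets relative to their parent box, and decompression conservatively re-inflates them:

```python
# Illustrative BVH-box quantization (not the paper's scheme): store child boxes
# as 8-bit grid offsets within the parent box instead of full floating point.
import numpy as np

parent_lo, parent_hi = np.array([0.0, 0.0, 0.0]), np.array([10.0, 4.0, 6.0])
child_lo, child_hi = np.array([1.2, 0.5, 2.0]), np.array([3.4, 2.5, 5.0])

scale = (parent_hi - parent_lo) / 255.0
q_lo = np.floor((child_lo - parent_lo) / scale).astype(np.uint8)  # round down so the box can only grow
q_hi = np.ceil((child_hi - parent_lo) / scale).astype(np.uint8)

# Decompression yields a conservative (slightly enlarged) child box from 6 bytes.
d_lo = parent_lo + q_lo * scale
d_hi = parent_lo + q_hi * scale
print(q_lo, q_hi)
print(d_lo, d_hi)
```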
Over the last decade a number of high-performance, domain-specific languages (DSLs) have emerged to help tackle the problem of ever-diversifying hardware and software employed in fields such as HPC (high performance computing), medical imaging, and computer vision. Most of those approaches rely on frameworks such as LLVM for efficient code generation and, to reach a broader audience, take input in C-like form. In this paper we present a DSL for image processing that is on par with competing methods, yet its design principles are in strong contrast to previous approaches. Our tool chain is much simpler, easing the burden on implementors and maintainers, while our output, C-family code, is both adaptable and shows high performance. We believe that our methodology provides a faster evaluation of language features and abstractions in the domains above.
In recent years, substantial progress has been made in the field of reverberant speech signal processing, including both single- and multichannel dereverberation techniques and automatic speech recognition (ASR) techniques that are robust to reverberation. In this paper, we describe the REVERB challenge, an evaluation campaign designed to evaluate such speech enhancement (SE) and ASR techniques, to reveal the state-of-the-art techniques, and to obtain new insights regarding potential future research directions. Even though most existing benchmark tasks and challenges for distant speech processing focus on the noise robustness issue and sometimes only on a single-channel scenario, a particular novelty of the REVERB challenge is that it is carefully designed to test robustness against reverberation, based on both real, single-channel, and multichannel recordings. This challenge attracted 27 papers, which represent 25 systems specifically designed for SE purposes and 49 systems specifically designed for ASR purposes. This paper describes the problems addressed in the challenge, provides an overview of the submitted systems, and scrutinizes them to clarify what current processing strategies appear effective in reverberant speech processing.
In this paper we show how a feature-oriented development methodology can be exploited to investigate a large set of possible implementations for a real-time rendering algorithm. We rely on previously published work to explore potential dimensions of the implementation space of an algorithm to be run on a graphics processing unit (GPU) using CUDA. The main contribution of our paper is to provide a clear example of the benefit to be gained from existing methods in a domain that only slowly moves toward higher-level abstractions. Our method employs a generative approach and makes heavy use of Common Lisp macros before the code is ultimately transformed to CUDA.
Background and Objective: Even today, pointing out an exam that can diagnose a patient with Parkinson's disease (PD) accurately enough is not an easy task. Although a number of techniques have been used in search for a more precise method, detecting such illness and measuring its level of severity early enough to postpone its side effects are not straightforward. In this work, after reviewing a considerable number of works, we conclude that only a few techniques address the problem of PD recognition by means of micrography using computer vision techniques. Therefore, we consider the problem of aiding automatic PD diagnosis by means of spirals and meanders filled out in forms, which are then compared with the template for feature extraction.
Methods: In our work, both the template and the drawings are identified and separated automatically using image processing techniques, thus needing no user intervention. Since we have no registered images, the idea is to obtain a suitable representation of both template and drawings using the very same approach for all images in a fast and accurate approach.
Results: The results have shown that we can obtain very reasonable recognition rates (approximately 67%), with the most accurate class being the one represented by the patients, which outnumbered the control individuals in the proposed dataset.
Conclusions: The proposed approach seemed to be suitable for aiding in automatic PD diagnosis by means of computer vision and machine learning techniques. Also, meander images play an important role, leading to higher accuracies than spiral images. We also observed that the main problem in detecting PD is the patients in the early stages, who can draw near-perfect objects, which are very similar to the ones made by control patients.
Programming GPUs with low-level libraries like CUDA and OpenCL is a tedious and error-prone task. Fortunately, algorithmic skeletons can shield developers from the complexity of parallel programming by encapsulating common parallel computing patterns. However, this simplification typically constrains programmers to write their applications using the GPU library employed by the skeleton implementation. In this work, we combine skeletal programming with model-driven software development (MDSD) to increase the freedom of choice regarding the employed GPU library instead of leaving all technical decisions to the skeleton implementation. We present a code generator that transforms models comprising skeletons, their input data, and input functions to parallel C++ code while taking care of data-offset calculations. The generator has been tested using different GPU and multi-GPU communication libraries such as Thrust and CUDA-MPI. We demonstrate our novel approach to GPU programming with two example applications: affinity propagation and n-body simulation.
In logical circuits, such as the arithmetic units of a processor system, arbitrary faults are becoming an increasingly serious concern. Modern manufacturing processes lead to lower reliability and a higher vulnerability of software execution to soft errors. The correctness of results is important especially for safety-critical applications, whose reliability depends on the fault-free execution of each single instruction and the dependencies between them. The more complex a piece of software is, the more unreliable its outcome becomes. However, there is also a contrary effect: if the probability of multiple faults increases, there is also the chance that two faults compensate each other and the result is correct again. This paper presents the basic ideas for such a reliability evaluation of a software's data flow under arbitrary soft errors, including the effect of fault compensation. Furthermore, this evaluation provides a way to compare different implementations of a data flow with respect to reliability. This is shown by comparing two different error codes as alternatives for coded data processing.
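The compensation effect can be illustrated with a toy example using an AN code; the code parameter and the injected fault values below are made up for illustration and are not the evaluation from the paper:

```python
# Toy illustration of fault compensation in coded data processing: two injected
# soft errors in a data flow can cancel out, so the final coded result is correct.
A = 3                                  # AN code: every value v is carried as A * v

def encode(v): return A * v
def check(c):  return c % A == 0       # residue check of the AN code

a, b = encode(10), encode(4)

# Fault-free data flow: (a + b) * 2
correct = (a + b) * 2

# Single soft error: +5 on the intermediate sum -> residue check fails.
single_fault = ((a + b) + 5) * 2
print("single fault detected:", not check(single_fault))

# Two soft errors that compensate: +5 and -5 on successive intermediates.
double_fault = (((a + b) + 5) - 5) * 2
print("double fault masked:", check(double_fault) and double_fault == correct)
```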
Program comprehension and the ability to find program errors are key skills of software engineering. The aim of this pilot study was to examine the visual processes of novice and advanced programmers in authentic tasks. Fifteen novices and eight advanced programmers were given eight short pieces of code. Their task was to either identify an error or give the output of the code. Eye movements and keyboard activity were recorded. On average, the novices spent more time reading the code than composing the response, whereas the more advanced programmers started composing the response sooner and spent more time on it. In general, the advanced programmers had shorter fixations and saccades. The results suggest that the advanced programmers are quicker to grasp the essence of the code and able to see more details in it. The advanced programmers had shorter fixations and saccade lengths during the second phase which might indicate the process of chunking.
Teaching software testing is a challenging task, especially if you want to impart more in-depth and practical knowledge to the students. Nevertheless, most lectures still follow a classic lecture format, despite the fact that this way of instruction is no longer the optimal way to meet today's requirements. In this paper we present our implementation of an active learning method to deepen the knowledge in academic software test education. We describe a card game for advanced learning that promotes students' collaboration and knowledge exchange in a playful and competitive manner. The design of the game is based on constructive and cooperative theories. A subsequent evaluation shows that the use of this card game for teaching software testing is a suitable method.
In this paper we present our first steps in defining the type, scope, and relevance of writing in higher education of software engineering. We aim to identify gaps in scientific research and to raise a new and necessary research interest in order to push research in this area. First we clarify the relevance of writing in higher education in general. In a second step we highlight the relevance of writing in the domain of software engineering in particular. The soft skills to be taught to students of engineering professions, and especially to software engineering students, are widely discussed. We discuss the skill of writing from a theoretical view as well as the reasons for the high relevance of this skill for future engineers. An obligation to teach writing in higher education is formulated.
Modelling approaches have to satisfy certain criteria in order to sufficiently encompass the characteristics of dependable heterogeneous multi- and many-core system architectures. This work-in-progress paper gives an overview of modern modelling approaches and their related research projects, particularly those regarding domain-specific architecture description languages, as well as of the specific challenges of dependable systems and heterogeneous multi- and many-core designs, i.e. scheduling techniques for real-time requirements and concerns regarding functional safety. Furthermore, an ongoing research effort to identify a set of criteria for evaluating the eligibility of modelling approaches for the task of adequately representing these systems and their specific characteristics is presented.
With the availability of the AUTOSAR standard, model-driven methodologies are becoming established in the automotive domain. However, the process of creating models of existing system components is often difficult and time consuming, especially when legacy code has to be re-used or information about the exact timing behavior is needed. In order to tackle this reverse engineering problem, we present CoreTAna, a novel tool that derives an AUTOSAR compliant model of a real-time system from a dynamic analysis of its trace recordings. This paper gives an overview of CoreTAna's current features and discusses its benefits for reverse engineering.
A cultural change in the university eco-system is possible with diverse learning approaches in the faculties. Diverse learning offerings can cope with the diversity of students regarding their value systems. Currently, teaching at universities is dominated by "teacher-centered teaching", although there are approaches that use different methods to accelerate and intensify the teaching and learning process. Nevertheless, these approaches often do not show the desired impact with all students. Using the Graves value systems model, this paper offers insights into why this is the case and proposes a set of methods that fits the different value systems of students.
In this research, we investigate the possibility of applying ranking task activities in teaching and learning software engineering courses. We introduce three types of ranking tasks, namely conceptual, contextual, and sequential ranking questions, which cover most core topics such as requirements analysis, architecture design, and quality validation in the course. We also conducted experiments with a group of students to see whether ranking tasks could increase their conceptual knowledge in specific areas. Assessments were given in order to evaluate the effectiveness of this activity, showing a clear increase in complex conceptual understanding.
Medical confidentiality is very important because it forms the foundation of trust between doctors and patients. In modern health services, data protection is essential. But there are also other important security objectives, such as the integrity and authenticity of medical data and health services, as their breach can potentially lead to life-threatening conditions. State-of-the-art security mechanisms are necessary to protect medical data and services and to prevent attacks known as "hacking". They should in particular include cryptography, as one can usually rely on mathematics more than on software security and access control mechanisms. As patients normally cannot assess the security of a service, security audits or certifications of health services should be provided to generate trust and confidence.
Shadow IT is a widespread phenomenon which includes systems, services, and processes that are not part of the "official" corporate IT. The topic is still an emerging research area and slowly gaining traction in recent years but a state of the art analysis is missing to date. We therefore conduct a structured literature review to derive a framework for causes, consequences, and governance of Shadow IT. We identify motivators, enablers, and missing barriers as causes for Shadow IT, both positive and negative consequences on an organizational and a technical level and governance approaches which focus on clearance of existing or prevention of future Shadow IT. Our review reveals that constructivist, qualitative design using a case study approach dominates existing research. There are no dominant theories yet to explain Shadow IT. We also highlight possibilities for future research to focus on the less explored governance aspect.
Ranking-type Delphi is a frequently used method in IS research. However, besides several studies investigating a rigorous application of ranking-type Delphi as a research method, a comprehensive and precise step-by-step guide on how to conduct a rigorous ranking-type Delphi study in IS research is currently missing. In addition, a common criticism of Delphi studies in general is that it is unclear whether there is indeed authentic consensus among the panelists, or whether panelists only agree for other reasons (e.g. acquiescence bias or fatigue with disagreeing after several rounds). This also applies to ranking-type Delphi studies. Therefore, this study aims to (1) provide a rigorous step-by-step guide to conducting ranking-type Delphi studies by synthesizing the results of existing research and (2) offer an analytical extension to the ranking-type Delphi method by introducing Best/Worst Scaling, which originated in marketing and consumer behavior research. A guiding example is introduced to increase the comprehensibility of the proposals. Future research needs to validate the step-by-step guide in an empirical setting as well as test the suitability of Best/Worst Scaling within the described research contexts.
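For illustration, the simple counting analysis commonly used with Best/Worst Scaling computes, per item, the difference between "best" and "worst" choices divided by the number of appearances; the item names and counts in this sketch are invented and not taken from the study:

```python
# Sketch of the best-minus-worst counting score used in Best/Worst Scaling.
best_counts = {"Top management support": 14, "Clear requirements": 9, "User involvement": 5}
worst_counts = {"Top management support": 2, "Clear requirements": 6, "User involvement": 11}
appearances = 20   # assumed: each item was shown to panelists in 20 choice sets

bw_scores = {
    item: (best_counts[item] - worst_counts[item]) / appearances
    for item in best_counts
}
for item, score in sorted(bw_scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{item}: {score:+.2f}")
```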
In a comprehensive literature review, we identified 21 different terms used for Shadow IT related concepts. This variety makes it difficult to identify related research and build upon it. To address this ambiguity, we reduce the different terms to six distinct concepts by developing a taxonomy and examining their relationships. We do so by using a rigorous iterative methodology to identify common characteristics and to classify terms along them. By clustering the results, we derive and visualize the taxonomy. The identified concepts are Feral Practices, Workarounds, Shadow IT, Shadow Systems, Un-enacted Projects, and Shadow Sourcing. We elaborate on the concepts along their characteristics and clearly define and delimit them. As a result, we create a guide for their usage, increase searchability and comparability, and unify existing knowledge.
An appreciation of globalisation issues by information systems (IS) graduates is a growing requirement of organisations with global reach. This paper discusses the planning of a joint course to be delivered online and face to face by two lecturers representing two tertiary institutes in Germany and New Zealand respectively. The IS offshoring content of the course is relatively unique and aspects of globalisation will be demonstrated through the dual country delivery of the course. The two lecturers involved have spent time lecturing and studying in their global partner’s country and tertiary institutes and so this joint delivery method is likely to also strengthen cultural ties between the two participating institutions including the students involved. Further research goals and issues are discussed for post-delivery of this dual country blended course. This paper provides a basis for this concept to be evaluated in further research.
Lean Management has been successfully implemented in production organizations for several decades. The study at hand investigates the application of Lean Management in IT organizations (Lean IT). The study offers three contributions: First, it explains on a conceptual level how Lean Management can be transferred from production to IT organizations (philosophy, principles, tools). Second, it provides a theoretical perspective on why Lean IT can be beneficial for IT organizations (IT Slack theory). Third, it provides insights into the stated research questions (three benefits and three propositions) from an initial case study of an internal IT service provider for a large international insurance company (>US$25 billion revenue; >20,000 employees; active in >120 countries) and lays out the research methodology and potential focus areas for further studies.
In production, Lean Management is regarded as one of the de facto standard management approaches. Lean Management in IT organizations (Lean IT), in contrast, is less widespread in practice and has hardly been researched. This article aggregates and extends the results of several research studies. Two aspects are discussed in particular: (1) a possible introduction model for Lean IT, which links five roles (sponsor, program manager, navigator, line manager, and line expert) with four phases (preparation, analysis, design, and implementation); and (2) specific challenges for line managers, who are strongly challenged by the bottom-up orientation of Lean IT introductions. In addition to a clear vision for their organizational unit and an understanding of where exactly Lean IT can support that unit, they need openness, a willingness to change, and the readiness to delegate responsibility to their employees. They should also have a sufficient time budget for the introduction in order to fulfill their shaping and quality-assuring function.