Digitalisierung
Refine
Year of publication
- 2020 (51)
Document Type
- conference proceeding (article) (22)
- Article (13)
- Part of a Book (6)
- conference proceeding (presentation, abstract) (3)
- conference proceeding (volume) (2)
- Preprint (2)
- Report (2)
- Book (1)
Is part of the Bibliography
- no (51)
Keywords
- Internet of Things (4)
- IoTAG (2)
- Quantum Annealing (2)
- Wirtschaftsinformatik (2)
- artificial intelligence (2)
- device identification (2)
- security rating (2)
- virtual reality (2)
- AR marker (1)
- Adenocarcinom (1)
Institute
- Fakultät Informatik und Mathematik (31)
- Labor für Digitalisierung (LFD) (9)
- Fakultät Elektro- und Informationstechnik (8)
- Laboratory for Safe and Secure Systems (LAS3) (7)
- Fakultät Maschinenbau (4)
- Institut für Sozialforschung und Technikfolgenabschätzung (IST) (4)
- Labor Informationssicherheit und Compliance (ISC) (4)
- Regensburg Center of Health Sciences and Technology - RCHST (4)
- Fakultät Sozial- und Gesundheitswissenschaften (3)
- Labor Finite-Elemente-Methode (FEM) (3)
Review status
- peer-reviewed (31)
In the design of urban public spaces, the inclusion of diverse voices enhances the development of products and services by synchronising designer expertise with people’s preferences. Multiple participatory methods exist, each with their respective benefits and drawbacks in terms of the quality of results, time and cost needed for preparing and conducting studies, and knowledge required for participation. Providing more concrete representations of abstract or intangible design concepts would be beneficial for laypeople unfamiliar with design or the case study. We propose a Virtual Reality (VR) platform to discover subjective preferences on public waiting rooms through immersive design experiences. The VR platform was tested with 463 participants with variety in age and cultural background. Following a qualitative data analysis, we discuss the suitability of our VR platform for fostering inclusive participation and how it impacts the role of the designer, as well as propose design guidelines for future VR studies.
Automation does not stop at public transport. The Design for Autonomous Mobility team at TUMCREATE presents results from design research investigating Virtual Reality and other new methods for the design of autonomous means of transport. The team takes a strongly human-centred approach and aims to offer all passengers more comfort and a positive travel experience.
Towards user acceptance of autonomous vehicles: a virtual reality study on human-machine interfaces
(2020)
Technological advances in businesses related to automated transportation raise challenges. Indeed, ensuring safety for users is essential for the future commercial launch of this technology. Within this study, human-machine interfaces (HMIs) are evaluated regarding the communication between autonomous vehicles (AVs) and pedestrians in order to prepare a successful deployment of the technology on the market. Since real-life experiments involving AVs remain dangerous, experiments were conducted via virtual reality (VR). The results show that, beyond confirming the need for HMIs, display-based concepts were more usable than laser projections. The study contributes to 1) substantiating the need for explicit HMIs on AVs (including concept recommendations) for a successful market entry; and 2) demonstrating that VR constitutes an advantageous alternative for conducting experiments in the field of autonomous transportation for both research and business. Further work is needed that involves more participants and an improved virtual environment.
Computer-Aided Design (CAD) constitutes an important tool for industrial designers. Similarly, Virtual Reality (VR) has the capability to revolutionize how designers work with its increased sense of scale and perspective. However, existing VR CAD applications are limited in terms of functionality and intuitive control. Based on a comparison of VR CAD applications, ImPro, a new application for immersive prototyping for industrial designers was developed. The user evaluations and comparisons show that ImPro offers increased usability, functionality, and suitability for industrial designers.
Mankind has always been confronted with limited knowledge about Nature, and many people have devoted their lives and intellects to the very question: how to gain knowledge, confirm ideas and theories, and understand the world we live in? Religion was one fundamental basis for explanations and existential questions. Philosophy and science were the other. Nowadays, science and technology dominate our daily lives, religion and metaphysics have lost their former dominance, while our thoughts and factual knowledge are increasingly governed by the digitised versions of conversation and discussion, filtered and guided by the algorithms of social networks. Today, "artificial intelligence" is a phenomenon in information technology that is taking over everyday routines: a new ideal, a new promise, and at the same time a stratagem used by political and economic systems to guide and manipulate our thinking, values, and behaviour. But nothing is so new that one could not find its precursors in earlier times. Science and mathematics have kindled many brilliant ideas. Can we describe the world by numbers, and find truth and explanation in mathematical structures? This essay is devoted to early ideas by the medieval philosopher Ramón Llull, the great rationalist G.W. Leibniz, and early approaches to combinatorics, number theory, and information processing. Llull's thinking machine and subsequent endeavours to find methods of mechanical reasoning and inference paved the way to modern-day concepts of artificial intelligence.
Independent component analysis (ICA), being a data-driven method, has been shown to be a powerful tool for functional magnetic resonance imaging (fMRI) data analysis. One drawback of this multivariate approach is that it is not, in general, compatible with the analysis of group data. Various techniques have been proposed to overcome this limitation of ICA. In this paper, a novel ICA-based workflow for extracting resting-state networks from fMRI group studies is proposed. An empirical mode decomposition (EMD) is used, in a data-driven manner, to generate reference signals that can be incorporated into a constrained version of ICA (cICA), thereby eliminating the inherent ambiguities of ICA. The results of the proposed workflow are then compared to those obtained by a widely used group ICA approach for fMRI analysis. In this study, we demonstrate that intrinsic modes, extracted by EMD, are suitable to serve as references for cICA. This approach yields typical resting-state patterns that are consistent over subjects. By introducing these reference signals into the ICA, our processing pipeline yields comparable activity patterns across subjects in a mathematically transparent manner. Our approach provides a user-friendly tool to adjust the trade-off between a high similarity across subjects and preserving individual subject features of the independent components.
- Planning and controlling projects with Excel
- With a practical example, built up step by step
- Deadlines, costs, and resources under control
- Useful VBA macros for project managers
- Business intelligence reports with PowerQuery and Power BI Desktop
Planning, monitoring and controlling projects: this can also be done with Excel in Microsoft 365. Ignatz Schels and Prof. Dr. Uwe M. Seidel are experienced project managers and project controllers. They show you how to use Microsoft's spreadsheet program for efficient project management. You practise on a real project: you create checklists, project structures, and cost plans, monitor deadlines and budgets, and document everything with infographics and charts. Together with the two authors you get to know the best functions and the most important analysis tools of Excel and program your first macros in the macro language VBA. Project management with Excel: try it out, it works! The new edition contains practical examples for the BI tools PowerQuery, Power Pivot, and Power BI, as well as tips on the latest Excel functions and tools such as dynamic arrays. All examples, tools, and VBA macros are available for download at plus.hanser-fachbuch.de.
In computer science, game-based learning is an exciting and entertaining way to learn a programming language or coding fundamentals. MonstER Park is a game which applies this concept to entity-relationship models (ERM) and teaches them in an easy, fun, and effective way. The plot of the game centres on a theme park named MonstER Park that is opening soon but is not yet ready. The player has to talk to little monsters and create an ER diagram step by step. The player gets instant feedback, and the game continues after a task is solved correctly. On completion of the game, the player knows the following fundamentals of ER diagrams and can download a certificate: entity types, (recursive) relationships, (complex, multi-valued) attributes, (compound) primary keys, and generalization. The game is free and available at https://www.monst-er.de without any registration.
Quantum computing is considered the “next big thing” when it comes to solving computational problems impossible to tackle using conventional computers. However, a major concern is that quantum computers could be used to crack current cryptographic schemes designed to withstand traditional cyberattacks. This threat also impacts future automated vehicles as they become embedded in a vehicle-to-everything (V2X) ecosystem. In this scenario, encrypted data is transmitted between a complex network of cloud-based data servers, vehicle-based data servers, and vehicle sensors and controllers. While the vehicle hardware ages, the software enabling V2X interactions will be updated multiple times. It is essential to make the V2X ecosystem quantum-safe through the use of “post-quantum cryptography” as well as other applicable quantum technologies. This SAE EDGE™ Research Report considers the following three areas to be unsettled questions in the V2X ecosystem: How soon will quantum computing pose a threat to connected and automated vehicle technologies? What steps and measures are needed to make a V2X ecosystem “quantum-safe?” What standardization is needed to ensure that quantum technologies do not pose an unacceptable risk from an automotive cybersecurity perspective?
Within many real-world networks, the links between pairs of nodes change over time. Thus, there has been a recent boom in studying temporal graphs. Recognizing patterns in temporal graphs requires a proximity measure to compare different temporal graphs. To this end, we propose to study dynamic time warping on temporal graphs. We define the dynamic temporal graph warping (dtgw) distance to determine the dissimilarity of two temporal graphs. Our novel measure is flexible and can be applied in various application domains. We show that computing the dtgw-distance is, in general, a challenging NP-hard optimization problem and identify some polynomial-time solvable special cases. Moreover, we develop a quadratic programming formulation and an efficient heuristic. In experiments on real-world data, we show that the heuristic performs very well and that our dtgw-distance performs favorably in de-anonymizing networks compared to other approaches.
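For intuition, here is a minimal Python sketch of the time-warping part of such a comparison, under strong simplifying assumptions: the vertex correspondence is fixed and each layer is given as an adjacency matrix, whereas the dtgw-distance defined in the paper additionally optimises over vertex mappings.

import numpy as np

def layer_cost(a, b):
    # Dissimilarity of two graph layers given as adjacency matrices
    # (vertex correspondence assumed fixed; the dtgw-distance additionally
    # optimises over vertex mappings).
    return np.linalg.norm(a - b)

def dtw_temporal_graphs(g1, g2):
    """g1, g2: lists of n x n adjacency matrices (one per time step)."""
    m, n = len(g1), len(g2)
    D = np.full((m + 1, n + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            c = layer_cost(g1[i - 1], g2[j - 1])
            D[i, j] = c + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[m, n]

# toy example: two 3-node temporal graphs with two and three layers
A = [np.eye(3), np.ones((3, 3))]
B = [np.eye(3), np.eye(3), np.ones((3, 3))]
print(dtw_temporal_graphs(A, B))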
Many industrially relevant problems can be deterministically solved by computers in principle, but are intractable in practice, as the seminal P/NP dichotomy of complexity theory and Cobham’s thesis testify. For the many NP-complete problems, industry needs to resort to using heuristics or approximation algorithms. For approximation algorithms, there is a more refined classification in complexity classes that goes beyond the simple P/NP dichotomy. As is well known, approximation classes form a hierarchy, that is, FPTAS ⊆ PTAS ⊆ APX ⊆ NPO. This classification gives a more realistic notion of complexity, but, unless unexpected breakthroughs happen for fundamental questions like P = NP, there is no known efficient algorithm that can solve such problems exactly on a realistic computer. Therefore, new ways of computation are sought. Recently, considerable hope was placed on the possible computational powers of quantum computers and quantum annealing (QA) in particular. However, the precise benefits of such a drastic shift in hardware are still uncharted territory to a good extent. Firstly, the exact relations between classical and quantum complexity classes pose many open questions, and secondly, technical details of formulating and implementing quantum algorithms play a crucial role in real-world applications. Guided by the hierarchy of classical optimisation complexity classes, we discuss how to map problems of each class to a quantum annealer. These problems are the Minimum Multiprocessor Scheduling (MMS) problem, the Minimum Vertex Cover (MVC) problem, and the Maximum Independent Set (MIS) problem. We experimentally investigate if and how the degree of approximability influences implementation and run-time performance. Our experiments indicate a discrepancy between classical approximation complexity and QA behaviour: the problems MIS and MVC, members of APX and PTAS, respectively, exhibit better solution quality on a QA than MMS, which is in FPTAS, despite the use of preprocessing for the latter. This leads to the hypothesis that traditional classifications do not immediately extend to the quantum annealing domain, at least when the properties of real-world devices are taken into account. A structural reason why FPTAS problems do not show good solution quality might be the use of an inequality in the problem description of the FPTAS problems. Formulating such inequalities on quantum hardware (mostly done by expressing a Quadratic Unconstrained Binary Optimisation (QUBO) problem in the form of a matrix) requires a lot of hardware space, which makes finding an optimal solution more difficult. Reducing the density of a QUBO is possible by appropriately pruning QUBO matrices. For the problems considered in our evaluation, we find that the achievable solution quality on a real-world machine is unexpectedly robust against pruning, often up to ratios as high as 50% or more. Since quantum annealers are probabilistic machines by design, the loss in solution quality is only of subordinate relevance, especially considering that the pruning of QUBO matrices allows for solving larger problem instances on hardware of a given capacity. We quantitatively discuss the interplay between these factors.
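To make the QUBO side concrete, here is a small Python sketch (not the authors' code) of the standard penalty formulation of Minimum Vertex Cover as a QUBO matrix, together with a naive pruning step that drops the smallest-magnitude couplings, in the spirit of the matrix pruning discussed above.

import numpy as np

def mvc_qubo(n, edges, penalty=2.0):
    """Upper-triangular QUBO matrix for Minimum Vertex Cover.

    Objective: sum_i x_i + penalty * sum_{(u,v) in E} (1 - x_u)(1 - x_v);
    the constant penalty * |E| is dropped.
    """
    Q = np.zeros((n, n))
    for i in range(n):
        Q[i, i] = 1.0                      # cost of picking vertex i
    for u, v in edges:
        Q[u, u] -= penalty                 # from expanding (1 - x_u)(1 - x_v)
        Q[v, v] -= penalty
        Q[min(u, v), max(u, v)] += penalty
    return Q

def prune(Q, ratio=0.5):
    """Zero out the smallest-magnitude off-diagonal couplings (a 'ratio' share)."""
    Qp = Q.copy()
    off = [(abs(Qp[i, j]), i, j)
           for i, j in zip(*np.triu_indices_from(Qp, k=1)) if Qp[i, j] != 0]
    off.sort()
    for _, i, j in off[: int(len(off) * ratio)]:
        Qp[i, j] = 0.0
    return Qp

# toy instance: a 4-cycle
Q = mvc_qubo(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
print(prune(Q, ratio=0.5))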
The advent of multi-core CPUs in nearly all embedded markets has prompted an architectural trend towards combining safety-critical and uncritical software on single hardware units. We present a novel architecture for mixed-criticality systems based on Linux that allows us to consolidate critical and uncritical parts onto a single hardware unit. CPU virtualisation extensions enable strict and static partitioning of hardware by direct assignment of resources, which allows us to boot additional operating systems or bare-metal applications running alongside Linux. The hypervisor Jailhouse is at the core of the architecture and ensures that the resulting domains may serve workloads of different criticality and cannot interfere in an unintended way. This retains Linux’s feature-richness in uncritical parts, while frugal safety- and real-time-critical applications execute in isolated domains. Architectural simplicity is a central aspect of our approach and a precondition for reliable implementability and successful certification. While standard virtualisation extensions provided by current hardware seem to suffice for a straightforward implementation of our approach, there are a number of further limitations that need to be worked around. This paper discusses the arising issues and evaluates the suitability of our approach for real-world safety- and real-time-critical scenarios.
We report the design and teaching experience of a Master-level seminar course on quantitative and empirical software engineering. The course combines elements of traditional literature seminars with active learning by scientific project work, in particular quantitative mixed-method analyses of open source systems. It also provides short introductions and refreshers to data mining and statistical analysis, and discusses the nature and practice of scientific knowledge inference. Student presentations of published research, augmented by summary reports, bridge to standard seminars. We discuss our educational goals and the course structure derived from them. We review research questions addressed by students in mini research reports, and analyse them as tokens on how junior-level software engineers perceive the potential of empirical software engineering research. We assess challenges faced, and discuss possible solutions.
Quantum computers have the potential of solving problems more efficiently than classical computers. While first commercial prototypes have become available, the performance of such machines in practical application is still subject to exploration. Quantum computers will not entirely replace classical machines, but serve as accelerators for specific problems. This necessitates integrating quantum computational primitives into existing applications. In this paper, we perform a case study on how to augment existing software with quantum computational primitives for the Boolean satisfiability problem (SAT) implemented using a quantum annealer (QA). We discuss relevant quality measures for quantum components, and show that mathematically equivalent, but structurally different ways of transforming SAT to a QA can lead to substantial differences regarding these qualities. We argue that engineers need to be aware that (and which) details, although they may be less relevant in traditional software engineering, require considerable attention in quantum computing.
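As an illustration of how such a transformation can be built, here is a Python sketch that turns two-literal SAT clauses into QUBO penalty terms; clauses with three or more literals need auxiliary variables, and different auxiliary choices yield mathematically equivalent but structurally different QUBOs, which is the kind of variation discussed above. The helper names are hypothetical.

from collections import defaultdict

def clause_to_qubo(lit_a, lit_b):
    """QUBO penalty for one 2-literal clause; a literal is (var_index, positive?).

    The clause is violated exactly when both literals are false, so the penalty
    is the product of the two 'literal is false' indicators:
    (1 - x) for a positive literal, x for a negated one.
    """
    (i, pos_i), (j, pos_j) = lit_a, lit_b
    const, lin, quad = 0.0, defaultdict(float), defaultdict(float)
    # expand (a0 + a1*x_i) * (b0 + b1*x_j)
    a0, a1 = (1.0, -1.0) if pos_i else (0.0, 1.0)
    b0, b1 = (1.0, -1.0) if pos_j else (0.0, 1.0)
    const += a0 * b0
    lin[i] += a1 * b0
    lin[j] += a0 * b1
    quad[(min(i, j), max(i, j))] += a1 * b1
    return const, dict(lin), dict(quad)

def cnf_to_qubo(clauses):
    """Sum the penalties of all 2-literal clauses into one QUBO
    (constant offset, linear terms, quadratic couplings)."""
    total_const, lin, quad = 0.0, defaultdict(float), defaultdict(float)
    for a, b in clauses:
        c, l, q = clause_to_qubo(a, b)
        total_const += c
        for k, v in l.items():
            lin[k] += v
        for k, v in q.items():
            quad[k] += v
    return total_const, dict(lin), dict(quad)

# (x0 or x1) and (not x0 or x1): any assignment with x1 = 1 has penalty 0
print(cnf_to_qubo([((0, True), (1, True)), ((0, False), (1, True))]))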
Ascertaining the feasibility of independent falsification or repetition of published results is vital to the scientific process, and replication or reproduction experiments are routinely performed in many disciplines. Unfortunately, such studies are only scarcely available in database research, with few papers dedicated to re-evaluating published results. In this paper, we conduct a case study on replicating and reproducing a study on schema evolution in embedded databases. We can exactly repeat the outcome for one out of four database applications studied, and come close in two further cases. By reporting results, efforts, and obstacles encountered, we hope to increase appreciation for the substantial efforts required to ensure reproducibility. By discussing the minute details required to ascertain reproducible work, we argue that such important, but often ignored, aspects of scientific work should receive more credit in the evaluation of future research.
Many problems of industrial interest are NP-complete, and quickly exhaust the resources of computational devices with increasing input sizes. Quantum annealers (QA) are physical devices that target this class of problems by exploiting quantum mechanical properties of nature. However, they compete with efficient heuristics and probabilistic or randomised algorithms on classical machines that allow for finding approximate solutions to large NP-complete problems. While first implementations of QA have become commercially available, their practical benefits are far from fully explored. To the best of our knowledge, approximation techniques have not yet received substantial attention. In this paper, we explore how approximate versions of problems, of varying degree, can be systematically constructed for quantum annealer programs, and how this influences result quality or the handling of larger problem instances on a given set of qubits. We illustrate various approximation techniques on both simulations and real QA hardware, on different seminal problems, and interpret the results to contribute towards a better understanding of the real-world power and limitations of current and future quantum computing.
Public development processes are a key characteristic of open source projects. However, fixes for vulnerabilities are usually discussed privately among a small group of trusted maintainers, and integrated without prior public involvement. This is supposed to prevent early disclosure, and cope with embargo and non-disclosure agreement (NDA) rules. While regular development activities leave publicly available traces, fixes for vulnerabilities that bypass the standard process do not.
We present a data-mining based approach to detect code fragments that arise from such infringements of the standard process. By systematically mapping public development artefacts to source code repositories, we can exclude regular process activities, and infer irregularities that stem from non-public integration channels. For the Linux kernel, the most crucial component of many systems, we apply our method to a period of seven months before the release of Linux 5.4. We find 29 commits that address 12 vulnerabilities. For these vulnerabilities, our approach provides a temporal advantage of 2 to 179 days to design exploits before public disclosure takes place, and fixes are rolled out.
Established responsible disclosure approaches in open development processes are supposed to limit premature visibility of security vulnerabilities. However, our approach shows that, instead, they open additional possibilities to uncover such changes, thereby thwarting the very premise. We conclude by discussing implications and partial countermeasures.
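Here is a much-simplified Python sketch of the underlying matching idea, assuming a pre-built set of hashes of publicly posted patches (the paper's pipeline uses richer artefact matching than this exact-hash comparison):

import hashlib
import subprocess

def normalised_diff(repo, commit):
    """Diff of a commit with volatile metadata (index lines, hunk offsets) stripped,
    so that it can be compared with patches posted on public mailing lists."""
    out = subprocess.run(["git", "-C", repo, "show", "--format=", commit],
                         capture_output=True, text=True, check=True).stdout
    kept = [l for l in out.splitlines()
            if not l.startswith(("index ", "@@ "))]
    return hashlib.sha256("\n".join(kept).encode()).hexdigest()

def offlist_commits(repo, commits, public_patch_hashes):
    """Commits whose normalised diff never showed up in public development artefacts.

    'public_patch_hashes' is assumed to be built beforehand from mailing-list
    archives with the same normalisation.
    """
    return [c for c in commits
            if normalised_diff(repo, c) not in public_patch_hashes]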
Appropriate daily brushing of the teeth is important for preventing oral diseases. Therefore, a personal assistant for assessing and coaching appropriate toothbrushing is needed. Herein, a three-dimensional toothbrush position measurement method using augmented reality (AR) markers is proposed. The AR markers are detected via a brushing video captured using a smartphone camera. The AR markers are installed on each surface of a dodecahedron attached at the rear end of the toothbrush. This report describes the proposed method, the resulting toothbrush position measurement accuracy, and the optimal number of markers needed for an accurate measurement of the position.
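For illustration, here is a short Python sketch of per-marker pose estimation with the classic opencv-contrib ArUco API; the dictionary, marker size, camera intrinsics, and input file are placeholder assumptions, not values from the paper.

import cv2
import numpy as np

# Camera intrinsics would come from a prior calibration of the smartphone camera.
camera_matrix = np.array([[900.0, 0.0, 640.0],
                          [0.0, 900.0, 360.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)
marker_length_m = 0.012          # assumed edge length of one face marker

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

cap = cv2.VideoCapture("brushing_video.mp4")   # hypothetical input file
while True:
    ok, frame = cap.read()
    if not ok:
        break
    corners, ids, _ = cv2.aruco.detectMarkers(frame, dictionary)
    if ids is not None:
        rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
            corners, marker_length_m, camera_matrix, dist_coeffs)
        # Each tvec is the 3-D position of one visible dodecahedron face;
        # combining several faces would give a more robust brush-head pose.
        for marker_id, tvec in zip(ids.ravel(), tvecs.reshape(-1, 3)):
            print(marker_id, tvec)
cap.release()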
Over the past few years, ontology merging and semantic ontology alignment have gained significant interest as research topics in the automotive application domain as a means of addressing semantic data heterogeneity. To accomplish complex and novel vehicle service requirements such as autonomous driving and V2X (vehicle-to-everything) communication, automotive applications involve collaborations of platform-specific data from heterogeneous enterprise component frameworks, and consequently data interoperability issues have increased. At the application component level, data interoperability relies on the semantic alignment or mapping between the data models of the various component framework interfaces, represented as XML schemas (XSD). While XML schemas are the preferred standard for exchanging interface descriptions between most components in the automotive application domain, data interoperability between semantically equivalent but structurally different data constructs of multiple heterogeneous XSDs remains a challenge in the absence of an ontology-based approach. To confront this crucial requirement for data interoperability, and to increase the reuse of existing components through their interfaces, we propose an approach to semantically map the various component framework interface data models, expressed as ontology schemas, based on the exploration of semantic synergies. The transformation between XSD and RDF (Resource Description Framework) schema representations and the use of queries over the ontology schemas for semantic mapping are demonstrated, including a real-world case study.
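As a toy illustration of querying lifted schemas, here is a Python sketch using rdflib that loads two (hypothetically named) interface ontologies into one graph and pairs classes by identical labels; the actual semantic-synergy exploration described above is considerably richer.

from rdflib import Graph

# Load two component-interface ontologies that were previously lifted from
# their XSD definitions to RDF/OWL (file names are placeholders).
g = Graph()
g.parse("framework_a_interface.ttl", format="turtle")
g.parse("framework_b_interface.ttl", format="turtle")

# Pair up classes that carry the same (case-insensitive) label: a deliberately
# naive stand-in for semantic mapping between the two schemas.
query = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX owl:  <http://www.w3.org/2002/07/owl#>
SELECT ?clsA ?clsB WHERE {
    ?clsA a owl:Class ; rdfs:label ?labelA .
    ?clsB a owl:Class ; rdfs:label ?labelB .
    FILTER (?clsA != ?clsB && lcase(str(?labelA)) = lcase(str(?labelB)))
}
"""
for cls_a, cls_b in g.query(query):
    print(f"candidate mapping: {cls_a}  <->  {cls_b}")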
Internet of Things (IoT) devices are critical to operate and maintain because of their number and high connectivity. Many security issues concern IoT devices and the networks they are integrated into. To help gain an overview of an IoT network, its devices, and their security, we propose a scoring system that provides a good impression of IT security. This system generates individual scores for each device, using features such as encryption, update behavior, etc. Furthermore, a summarized score for the whole network is calculated to show the status of network security to the administrator in an easy way. To enable the scoring system, a precise list of the existing devices and their operating status is necessary. To achieve this, we present an open standard for IoT Device IdentificAtion and RecoGnition (IoTAG for short), which requires devices to report, e.g., their name, a unique ID, the firmware version, and the supported encryption. The proposed standard is described in detail and an implementation guideline is given in this paper. Additionally, information is provided on how to realize serialization, integrity, and communication with IoTAG.
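A toy Python sketch of the scoring idea follows, with illustrative fields and weights that are not taken from the proposed standard:

from dataclasses import dataclass
from typing import List

@dataclass
class DeviceTag:
    """A few of the fields an IoTAG-style self-description might carry
    (illustrative subset, not the full proposed standard)."""
    device_id: str
    name: str
    firmware_current: bool
    encryption: str          # e.g. "TLS1.3", "TLS1.2", "none"
    auto_updates: bool

def device_score(tag: DeviceTag) -> float:
    """Toy scoring: weight a handful of security features into a 0..1 score.
    The concrete features and weights of the proposed system are defined in the paper."""
    score = 0.0
    score += 0.4 if tag.encryption.startswith("TLS") else 0.0
    score += 0.3 if tag.firmware_current else 0.0
    score += 0.3 if tag.auto_updates else 0.0
    return score

def network_score(tags: List[DeviceTag]) -> float:
    """Summarized network score: here simply the mean of the device scores,
    so a single weak device visibly lowers the overall rating."""
    return sum(device_score(t) for t in tags) / len(tags) if tags else 0.0

devices = [
    DeviceTag("cam-01", "IP camera", firmware_current=False, encryption="none", auto_updates=False),
    DeviceTag("hub-01", "Smart hub", firmware_current=True, encryption="TLS1.3", auto_updates=True),
]
print(network_score(devices))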
To ensure the secure operation of IoT devices in the future, they must be continuously monitored. This starts with an inventory of the devices and a check for a current software version, and extends to the encryption algorithms and active services used. Based on this information, a security analysis and rating of the whole network is possible. To solve this challenge in growing network environments, we present a proposal for a standard. With the IoT Device IdentificAtion and RecoGnition (IoTAG), each IoT device reports its current status to a central location as required and provides information on security. This information includes a unique ID, the exact device name, the current software version, active services, cryptographic methods used, etc. The information is signed to make misuse more difficult and to ensure that the device can always be uniquely identified. In this paper, we introduce IoTAG in detail and describe the necessary requirements.
Increasing cyber-attacks on Internet of Things (IoT) environments are a growing problem for digitized households worldwide. The purpose of this study is to investigate how an intelligent Intrusion Detection System (iIDS) can provide more security in IoT networks with a novel architecture that combines multiple classical and machine-learning approaches. By combining classical security analysis methods and modern concepts of artificial intelligence, we increase the quality of attack detection and can therefore conduct dedicated attack suppression. The architecture of the iIDS consists of different layers, which in part achieve self-sufficient results. The results of the different modules are calculated by means of statement variables and evaluation techniques adapted to the individual module elements and are subsequently combined via threshold considerations. The architecture combines approaches for the analysis and processing of IoT network traffic and evaluates it to an aggregated score. From this result it can be determined whether the analyzed data indicates device misuse or attempted break-ins into the network. This study answers the question of whether a connection between classical and modern concepts for monitoring and analyzing IoT network traffic can be implemented meaningfully within a reliable architecture of an iIDS.
We analyze sparse frame-based regularization of inverse problems by means of a diagonal frame decomposition (DFD) for the forward operator, which generalizes the SVD. The DFD allows us to define a non-iterative (direct) operator-adapted frame thresholding approach, which we show to provide a convergent regularization method with linear convergence rates. These results are compared to the well-known analysis and synthesis variants of sparse ℓ1-regularization, which are usually implemented through iterative schemes. If the frame is a basis (non-redundant case), the three versions of sparse regularization, namely the synthesis and analysis variants of ℓ1-regularization as well as the DFD thresholding, are equivalent. However, in the redundant case, those three approaches are pairwise different.
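For orientation, a schematic of the non-iterative DFD thresholding estimator, written under commonly used notation assumptions (a DFD (u_λ, v_λ, κ_λ) of the forward operator A, a dual frame ū_λ, and soft thresholding S; the precise thresholds and weights follow the paper):

\[
  \langle A x, v_\lambda \rangle = \kappa_\lambda \langle x, u_\lambda \rangle
  \quad \text{for all } \lambda \in \Lambda ,
  \qquad
  x_\alpha(y^\delta)
    = \sum_{\lambda \in \Lambda} \frac{1}{\kappa_\lambda}\,
      S_{\alpha_\lambda}\!\bigl(\langle y^\delta, v_\lambda \rangle\bigr)\,
      \bar{u}_\lambda ,
  \qquad
  S_t(s) = \operatorname{sign}(s)\,\max\{|s| - t,\, 0\}.
\]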
The Internet of Things (IoT) is widely used as a synonym for nearly every connected device. This makes it really difficult to find the right kind of scientific publication for the intended category of IoT. Conferences and other events for IoT are confusing about their target group (consumer, enterprise, industrial, etc.), and standardisation organisations suffer from the same problem. To demonstrate these problems, this paper shows the results of an analysis of IoT publications in different research libraries. The number of results for IoT, consumer, enterprise, and industrial search queries was evaluated, and a manual study of 100 publications was conducted. Depending on the research library or search engine, different results regarding the distribution of consumer, enterprise, and industrial IoT are visible. The comparison with the results of the manual evaluation shows that some search queries do not return all desired publications or that considerably more, unwanted results are returned. Most researchers do not use the keywords correctly, and the exact category of IoT can only be determined via the abstract. This shows major problems with the use of the term IoT and its vague delimitation.
This paper describes the development of a tilting locomotion system based on a compliant tensegrity structure with multiple stable equilibrium configurations. A tensegrity structure featuring 4 stable equilibrium states is considered. The mechanical model of the structure is presented and the corresponding equations of motion are derived. The variation of the length of selected structural members allows the prestress state and the corresponding shape of the tensegrity structure to be influenced. Based on bifurcation analyses, a reliable actuation strategy to control the current equilibrium state is designed. In this work, the tensegrity structure is assumed to be in contact with a horizontal plane due to gravity. The derived actuation strategy is utilized to generate tilting locomotion by successively changing the equilibrium state. Numerical simulations are evaluated with regard to the locomotion characteristics. In order to validate this theoretical approach, a prototype is developed. Experiments regarding the equilibrium configurations, the actuation strategy, and the locomotion characteristics are evaluated using image processing tools and motion capturing. The results verify the theoretical data and confirm the working principle of the investigated tilting locomotion system. This approach represents a feasible actuation strategy to realize reliable tilting locomotion utilizing the multistability of compliant tensegrity structures.
While KPIs and dashboards are available in large numbers for running a company in steady-state operation, it appears to be particularly difficult to maintain and communicate orientation during the digital transformation. Which aspects are important here? What is the influence of corporate culture? In which order should we proceed? How can we measure and visualize the important parameters? We answer these questions taking different models and perspectives into account and illustrate possible concrete actions using the hypothetical example of a car dealership.
Given its potential benefits, the entry of artificial intelligence (AI) into medicine seems inevitable. The agent-like character of AI-based systems gives rise to partly novel normative demands and challenges. For the highly sensitive application area of medicine, it therefore appears necessary to frame the use of AI with ethical guidelines. This raises the question of which basis of experience an ethical foundation for the use of AI-based technology could rest on. This does not mean inferring "ought" from "is", but rather taking into account normative debates that have already been conducted. One way of approaching the normative landscape of AI is to engage with the history of AI's development and the associated debates on ethical and social aspects. With this explorative approach, relevant problem areas can be identified, preliminary design and deployment recommendations for AI systems in practice can be formulated, and proposals for their embedding in existing organisational structures can be generated.
The characteristic feature of inverse problems is their instability with respect to data perturbations. In order to stabilize the inversion process, regularization methods have to be developed and applied. In this work, we introduce and analyze the concept of the filtered diagonal frame decomposition, which extends the standard filtered singular value decomposition to the frame case. Frames as generalized singular systems allow better adaptation to a given class of potential solutions. In this paper, we show that the filtered diagonal frame decomposition yields a convergent regularization method. Moreover, we derive convergence rates under source-type conditions and prove order optimality under the assumption that the considered frame is a Riesz basis.
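As a schematic illustration (notation assumptions as for the DFD thresholding sketch above: DFD (u_λ, v_λ, κ_λ), dual frame ū_λ), the filtered DFD reconstruction replaces thresholding by a regularizing filter f_α that damps small quasi-singular values, in direct analogy to the filtered SVD, e.g. the Tikhonov filter f_α(κ) = κ/(κ² + α):

\[
  x_\alpha(y^\delta)
    = \sum_{\lambda \in \Lambda} f_\alpha(\kappa_\lambda)\,
      \langle y^\delta, v_\lambda \rangle\, \bar{u}_\lambda .
\]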
In this paper, compliant multistable tensegrity structures with discrete variable stiffness are investigated. The different stiffness states result from the different prestress states of these structures corresponding to the equilibrium configurations. Three planar tensegrity mechanisms with two stable equilibrium configurations are considered as examples. The overall stiffness of these structures is characterized by investigations of their geometrically nonlinear static behavior. Dynamic analyses show the possibility of changing between the equilibrium configurations and enable the derivation of suitable actuation strategies.
The use of mechanically prestressed compliant structures in soft robotics is a recently discussed topic. Tensegrity structures, consisting of a set of rigid, disconnected, compressed members connected to a continuous net of prestressed elastic tensioned members, form one specific class of these structures. Robots based on these structures have manifold shape-changing abilities and can adapt their mechanical properties reversibly by changing their prestress state according to specific tasks.
In this paper, selected aspects of the potential use of elastomer materials in such structures are discussed with the help of theoretical analysis. To this end, a selected basic tensegrity structure with elastomer members is investigated, focusing on the stiffness and shape-changing ability in dependence on the nonlinear hyperelastic behavior of the elastomer materials used. The considered structure is compared with a conventional tensegrity structure with linear elastic tensioned members. Finally, selected criteria for the advantageous use of elastomer materials in compliant tensegrity robots are discussed.
Background:
Fitbit and Garmin motion tracker devices are widely used in research. The validity and reliability of these devices have been proven for healthy adults aged between 18 and 64 years.
Objectives:
Comparing the data output of the two devices.
Methods:
Observational case study on a test track and in the domestic environment of an 80-year-old multimorbid female geriatric patient.
Results:
Highly significant correlation between the devices on the test track [r=.776, p≤.001, BCa 95% CI (.618; .874), N=33], but significantly different results in the domestic environment over time (z=4.840, p≤.001).
Conclusion:
The dominant/non-dominant body side and further sources of error may play a role in monitoring steps with these devices.
Code reviews are an essential part of quality assurance in modern software projects. But despite their great importance, they are still carried out in a way that relies on human skills and decisions. During the last decade, there have been several publications on code reviews using eye tracking as a method, but only a few studies have focused on the performance differences between experts and novices. To gain a deeper understanding of these differences, the following experiment was developed: this study surveys expertise-related differences in experts’, advanced programmers’, and novices’ eye movements during the review of eight short C++ code examples, including correct and erroneous code. A sample of 35 participants (21 novices, 14 advanced and expert programmers) was recruited. A Tobii Spectrum 600 was used for the data collection. Measures included participants’ eye movements during the code review, demographic background data, and cued retrospective verbal comments on replays of their own eye movement recordings. Preliminary results provide evidence of experience-related differences between participants. Advanced and expert programmers performed significantly better at error detection, and the eye-tracking data implies a more efficient reviewing strategy.
Tutorial on Software Engineering Education in Co-Located Multi-User Eye-Tracking-Environments
(2020)
We briefly describe a tutorial on the application of Eye-Tracking technology for Software Engineering Education. We will showcase our setup of a large-scale Eye-Tracking-Classroom and its usage for real-time improvement of traditional learning scenarios in Software Engineering Education. We will focus on the integration of gaze data into modern integrated development environments (IDEs) and demonstrate a complete workflow for its usage in co-located multi-user Eye-Tracking-Environments.
UML (Unified Modeling Language) is the current de facto as well as de jure standard (ISO/IEC 19505:2012) notation to visualize models in software development. UML provides essential guidelines and rules to visualize and understand complex software systems. This is the reason why it has become part of curricula for software engineering courses at many universities worldwide. It is well known, however, that UML is hard to grasp for novices, mainly due to its complexity. In order to tackle the problem of teaching UML to novice students appropriately, it is inevitable to understand their needs and problems much better than we do now. This paper presents empirical insights into students' problems when developing common UML diagrams. Identified problems are generalized, giving rise to a problem catalogue that is derived from our empirical findings, thus establishing a basis for addressing these problems through focused learning arrangements.
At the beginning of every security analysis or penetration test of a system, information about the target has to be gathered. On IT systems, a port scan is usually performed as a first step of an investigation. Since the communication protocols differ in automotive systems, generic port-scanning tools cannot be used for a security analysis of CANs.
More complex protocols have a higher likelihood of implementation errors and bugs. On CAN networks, such payloads are transferred through International Standard Transport Protocol (ISO-TP) communication. We designed a new methodology to identify ISO-TP endpoints in automotive networks. Each of these endpoints can provide exploitable application layer protocols and therefore has to be considered during penetration testing and security analysis.
We contribute a new scan approach for the automated evaluation of possible attack surfaces in automotive CAN networks, which has a higher coverage than state-of-the-art approaches and offers multiple further advantages.
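To sketch the basic probing idea (not the authors' scanner), the following Python snippet uses python-can to send an ISO-TP First Frame to each 11-bit CAN ID and records IDs that answer with a Flow Control frame; real scanners additionally pair request and response IDs, support extended addressing, and rate-limit traffic.

import can

def scan_isotp_endpoints(channel="can0", id_range=range(0x000, 0x800), timeout=0.02):
    """Probe every 11-bit CAN ID with an ISO-TP First Frame and record which IDs
    answer with a Flow Control frame (0x3 in the first PCI nibble)."""
    bus = can.interface.Bus(channel=channel, interface="socketcan")
    endpoints = []
    # ISO-TP First Frame announcing a (bogus) 20-byte payload: PCI bytes 0x10 0x14
    probe = [0x10, 0x14, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00]
    for tx_id in id_range:
        bus.send(can.Message(arbitration_id=tx_id, data=probe, is_extended_id=False))
        msg = bus.recv(timeout)
        if msg is not None and len(msg.data) > 0 and (msg.data[0] & 0xF0) == 0x30:
            endpoints.append((tx_id, msg.arbitration_id))
    bus.shutdown()
    return endpoints

if __name__ == "__main__":
    for tx, rx in scan_isotp_endpoints():
        print(f"possible ISO-TP endpoint: request 0x{tx:03X} -> response 0x{rx:03X}")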
PURPOSE
IT outsourcing (ITO) has developed into an established practice for organizations but the interorganizational and oftentimes international collaboration it involves comes at a price: Reports from academia and practice suggest that more than 25% of all ITO projects fail, many because of cultural differences between client and provider organizations. Against this background, this paper analyzes the complex nature of cultural distance and its multi-faceted effect on ITO success.
DESIGN/METHODOLOGY/APPROACH
This paper builds upon extant literature on culture on the national, organizational and team level, conceptualizes its effect on relationship quality and ITO success, and hypothesizes a model on potential moderators and management techniques to offset culture-induced challenges. It then evaluates and refines the model by means of an interpretive qualitative research design for an in-depth single-case study of ProSiebenSat.1 Media SE (P7S1), a leading European media company that reconfigured its IT sourcing model three times in 10 years.
FINDINGS
The results from interviews with top managers from client and provider organizations represent one of the first integrated views on the critical importance of cultural compatibility on multiple levels, provide manifold examples for its complex effect on ITO success, as well as moderators and potential management techniques to promote ITO success.
RESEARCH LIMITATIONS/IMPLICATIONS
This paper contributes relevant empirical insights to the growing body of literature on culture and its underestimated role in ITO success. It builds on tentative theory that is confirmed and refined.
PRACTICAL IMPLICATIONS
The paper helps in substantiating the complex and intangible nature of culture and demonstrates means for its effective management.
ORIGINALITY/VALUE
The results from interviews with top managers from client and provider organizations represent one of the first integrated views on the critical importance of cultural compatibility on multiple levels, provide manifold examples for its complex effect on ITO success, as well as moderators and potential management techniques to promote ITO success.
Teletherapy comprises a wide range of synchronous and asynchronous options for the digital care of patients. Video therapy constitutes an important synchronous offering that can extend the available therapeutic services. International studies, as well as studies from Germany, confirm the potential of including digital measures in therapeutic care. This article presents the results of a qualitative study on therapists' experiences with video therapy gathered during the COVID-19 crisis.
Barrett's esophagus has seen a swift rise in the number of cases in the past years. Although traditional diagnosis methods offer a vital role in early-stage treatment, they are generally time- and resource-consuming. In this context, computer-aided approaches for automatic diagnosis emerged in the literature since early detection is intrinsically related to remission probabilities. However, they still suffer from drawbacks because of the lack of available data for machine learning purposes, thus implying reduced recognition rates. This work introduces Generative Adversarial Networks to generate high-quality endoscopic images, thereby identifying Barrett's esophagus and adenocarcinoma more precisely. Further, Convolutional Neural Networks are used for feature extraction and classification purposes. The proposed approach is validated over two datasets of endoscopic images, with the experiments conducted over the full and patch-split images. The application of Deep Convolutional Generative Adversarial Networks for the data augmentation step and LeNet-5 and AlexNet for the classification step allowed us to validate the proposed methodology over an extensive set of datasets (based on original and augmented sets), reaching results of 90% accuracy for the patch-based approach and 85% for the image-based approach. Both results are based on augmented datasets and are statistically different from the ones obtained in the original datasets of the same kind. Moreover, the impact of data augmentation was evaluated in the context of image description and classification, and the results obtained using synthetic images outperformed the ones over the original datasets, as well as other recent approaches from the literature. Such results suggest promising insights related to the importance of proper data for the accurate classification concerning computer-assisted Barrett's esophagus and adenocarcinoma detection.
Pixel-level classification is an essential part of computer vision. For learning from labeled data, many powerful deep learning models have been developed recently. In this work, we augment such supervised segmentation models by allowing them to learn from unlabeled data. Our semi-supervised approach, termed Error-Correcting Supervision, leverages a collaborative strategy. Apart from the supervised training on the labeled data, the segmentation network is judged by an additional network. The secondary correction network learns on the labeled data to optimally spot correct predictions, as well as to amend incorrect ones. As an auxiliary regularization term, the corrector directly influences the supervised training of the segmentation network. On unlabeled data, the output of the correction network is essential to create a proxy for the unknown truth. The corrector’s output is combined with the segmentation network’s prediction to form the new target. We propose a loss function that incorporates both the pseudo-labels as well as the predictive certainty of the correction network. Our approach can easily be added to supervised segmentation models. We show consistent improvements over a supervised baseline in experiments on both the Pascal VOC 2012 and the Cityscapes datasets with varying amounts of labeled data.
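A schematic PyTorch sketch of how such a target and weighting could be formed on unlabeled data; this is an illustration of the idea, not the authors' exact loss.

import torch
import torch.nn.functional as F

def unsupervised_target_loss(seg_logits, corr_logits):
    """Illustrative combination of segmentation prediction and corrector output:
    hard pseudo-labels from the segmentation network, masked and weighted by the
    correction network's per-pixel judgement and certainty (an assumption about
    shapes: seg_logits (B, C, H, W), corr_logits (B, 1, H, W))."""
    seg_prob = seg_logits.softmax(dim=1)
    corr_prob = torch.sigmoid(corr_logits)            # P(prediction correct) per pixel
    pseudo = seg_prob.argmax(dim=1)                   # (B, H, W) hard pseudo-labels
    certainty = torch.max(corr_prob, 1.0 - corr_prob).squeeze(1)
    loss = F.cross_entropy(seg_logits, pseudo, reduction="none")
    # only trust pixels the corrector marks as correct, weighted by its certainty
    mask = (corr_prob.squeeze(1) > 0.5).float()
    return (loss * mask * certainty).mean()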
Shadow IT describes covert/hidden IT systems that are managed by business entities themselves. Additionally, there are also overt forms in practice, so-called Business-managed IT, which share most of the characteristics of Shadow IT. To better understand this phenomenon, we interviewed 29 executive IT managers about positive and negative cases of Shadow IT and Business-managed IT. By applying qualitative comparative analysis (QCA), we derived four conditions that characterize these cases: Aligned, local, simple, and volatile. The results show that there are three sufficient configurations of conditions that lead to a positive outcome; one of them even encompasses Shadow IT. The most important solution indicates that IT systems managed by business entities are viewed as being positive if they are aligned with the IT department and limited to local requirements. This allows to balance local responsiveness to changing requirements and global standardization. In contrast, IT systems that are not aligned and permanent (and either organization-wide or simple) are consistently considered as negative. Our study is the first empirical quantitative–qualitative study to shed light on the success and failure of Shadow IT and Business-managed IT.
In the era of digitalization, companies review how to build competitive advantage drawing on their capabilities. In this context, the paper revisits the concept of IT capabilities. We summarize the “canonical” body of literature to characterize how leading scholars conceptualized and differentiated IT capabilities and then we show how new types of IT capabilities (for example, rising from SMACIT technologies) relate to this. We find that the resource-based view is still leading in how to conceptualize IT capabilities. However, an alternative perspective on IT assets is emerging, looking at it from the angle of digital technologies as stacks. Digital technologies form the foundation for a new technology-driven perspective on IT capabilities. This new view complements the established differentiation of IT capabilities considering IT infrastructure flexibility, IT management, and IT personnel capability. Both perspectives describe the explorative and exploitative nature of IT capabilities. This paper helps scholars and practitioners to clearly distinguish different perspectives of IT capabilities on how to build a competitive advantage from IT today.
Three changes characterize demographic change in Germany: average life expectancy is rising, the share of old and very old people is growing, and the share of younger generations is shrinking. The consequences are an increase in chronic-degenerative and care-intensive diseases, a shrinking labour force potential, and social security systems on the verge of overload. The increased use of technology in care, in particular of robots, is supposed to provide relief: robots are to complement or replace care workers in order to counter the care crisis and reduce the costs of care. However, this entails numerous challenges, not least ethical questions: care supported by technology or robots will change care at its core, professional roles and conceptions of the human being will be called into question, and moral claims will be affected. This article traces this complex situation and outlines a discourse-ethics-based procedure (MEESTAR) for negotiating the conditions for the use of robots in care and health services.
Cybersecurity in Health Care
(2020)
Ethical questions have always been crucial in health care; the rapid dissemination of ICT makes some of those questions even more pressing and also raises new ones. One of these new questions is cybersecurity in relation to ethics in health care. In order to more closely examine this issue, this chapter introduces Beauchamp and Childress’ four principles of biomedical ethics as well as additional ethical values and technical aims of relevance for health care. Based on this, two case studies—implantable medical devices and electronic Health Card—are presented, which illustrate potential conflicts between ethical values and technical aims as well as between ethical values themselves. It becomes apparent that these conflicts cannot be eliminated in general but must be reconsidered on a case-by-case basis. An ethical debate on cybersecurity regarding the design and implementation of new (digital) technologies in health care is essential.
In recent years, the workshop "Bildverarbeitung für die Medizin" has established itself through successful events. The goal in 2020 is once again to present current research results and to deepen the dialogue between scientists, industry, and users. The contributions in this volume, some of them in English, cover all areas of medical image processing, in particular imaging and acquisition, machine learning, image segmentation and image analysis, visualization and animation, time series analysis, computer-aided diagnosis, biomechanical modelling, validation and quality assurance, image processing in telemedicine, and much more.