Digitalisierung
Document Type
- Conference proceeding (article) (343)
- Article (173)
- Conference proceeding (presentation, abstract) (45)
- Part of a book (35)
- Book (13)
- Preprint (11)
- Conference proceeding (volume) (4)
- Report (4)
- Doctoral thesis (3)
- Moving images (3)
Keywords
- Offshoring (13)
- Betriebliches Informationssystem (12)
- Informationstechnik (11)
- Datenschutz (10)
- Digitalisierung (9)
- Datensicherung (8)
- Elektronische Gesundheitskarte (8)
- Information systems (8)
- Internet of Things (8)
- Literaturbericht (8)
Institute
- Fakultät Informatik und Mathematik (363)
- Fakultät Elektro- und Informationstechnik (219)
- Laboratory for Safe and Secure Systems (LAS3) (205)
- Labor für Digitalisierung (LFD) (84)
- Regensburg Strategic IT Management (ReSITM) (53)
- Labor eHealth (eH) (35)
- Fakultät Maschinenbau (30)
- Labor für Technikfolgenabschätzung und Angewandte Ethik (LaTe) (25)
- Labor Parallele und Verteilte Systeme (23)
- Labor Datenkommunikation (18)
Review status
- peer-reviewed (257)
- reviewed (7)
Evelin – ein Forschungsprojekt zur systematischen Verbesserung des Lernens von Software Engineering
(2012)
Skeletons are common patterns of parallelism, such as farm and pipeline, that can be abstracted and offered to the application programmer as programming primitives. We describe the use and implementation of skeletons on emerging computational grids, with the skeleton system Lithium, based on Java and RMI, as our reference programming system. Our main contribution is the exploration of optimization techniques for implementing skeletons on grids based on an optimized, future-based RMI mechanism, which we integrate into the macro-dataflow evaluation mechanism of Lithium. We discuss three optimizations: 1) a lookahead mechanism that allows multiple tasks to be processed concurrently at each grid server and thereby increases the overall degree of parallelism, 2) a lazy task-binding technique that reduces interactions between grid servers and the task dispatcher, and 3) dynamic improvements that optimize the collection of results and the workload balancing. We report experimental results that demonstrate the improvements due to our optimizations on various testbeds, including a heterogeneous grid-like environment.
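As a rough illustration of the farm pattern and the lookahead idea (not Lithium's Java/RMI implementation; the function names and the lookahead parameter are assumptions made for this sketch):

```python
# Illustrative sketch: a farm skeleton that keeps several tasks in flight per
# worker, mimicking the lookahead idea described above.
from concurrent.futures import ThreadPoolExecutor, as_completed

def farm(worker, tasks, n_workers=4, lookahead=2):
    """Apply `worker` to each task, keeping up to n_workers*lookahead tasks pending."""
    results = []
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        pending, it = set(), iter(tasks)
        # Fill the pipeline up to the lookahead limit before collecting results.
        for _ in range(n_workers * lookahead):
            try:
                pending.add(pool.submit(worker, next(it)))
            except StopIteration:
                break
        while pending:
            done = next(as_completed(pending))   # wait for the next finished task
            pending.remove(done)
            results.append(done.result())
            try:
                pending.add(pool.submit(worker, next(it)))  # refill the window
            except StopIteration:
                pass
    return results

if __name__ == "__main__":
    print(farm(lambda x: x * x, range(10)))
```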
We present in this paper a new lock-based resource sharing protocol, PWLP (Preemptable Waiting Locking Protocol), for embedded multi-core processors. It is based on the busy-wait model and works with non-preemptive critical sections, while tasks may be preempted by higher-priority tasks while they are waiting for resources. Our protocol can be applied in partitioned as well as global scheduling scenarios, in which fixed task priorities, fixed job priorities, or dynamically assigned priorities may be used. Furthermore, the PWLP permits nested requests to shared resources. Finally, we present a case study based on event-based simulations in which the FMLP (Flexible Multiprocessor Locking Protocol) and the proposed PWLP are compared.
With multi-core controllers entering the area of automotive control ECUs, strategies for parallelizing the control algorithms come into focus. This paper deals with a special part of automotive powertrain software, called state transitions. Since dependencies between the runnables executed there are weak, the transitions provide a good basis for parallelization. We present a strategy for efficiently distributing the execution of runnables to different cores while taking care of inner and outer dependencies. The strategy is accompanied by two case studies demonstrating the performance of the concept. The first one is carried out to find the most efficient strategies for parallelizing state transitions, based on randomly generated, simulated state transitions. In the second one, the developed partitioning strategies are applied to a real software project for an automotive powertrain system.
Global scheduling algorithms are very promising for application in embedded real-time systems using multi-core controllers. In this paper we take a first step toward applying such scheduling methods to real existing systems. In particular, a new resource model is necessary to avoid deadlocks, as this goal cannot be achieved with the standard OSEK Priority Ceiling Protocol when shared global resources are in use. We also introduce the new metric mean Normalized Blocking Time in order to compare locking mechanisms with respect to the timing effects of their blocking behavior. Finally, we give a simulation-based application example of the new metric using two different kinds of semaphore models and an example task set typical of existing embedded real-time systems in the automotive powertrain environment.
Background:
Fitbit and Garmin motion tracker devices are widely used in research. The validity and reliability of these devices are proven for healthy adults between 18 and 64 years of age.
Objectives:
Comparing data output of two devices.
Methods:
Observational case study on a test track and in the domestic environment of an 80-year-old, multimorbid female geriatric patient.
Results:
Highly significant correlation of the devices on the test track [r=.776, p≤.001, BCa 95% CI (.618; .874), N=33], but significantly different results in the domestic environment over time (z=4.840, p≤.001).
Conclusion:
The dominant/non-dominant body side and further sources of error may play a role in monitoring steps with these devices.
During a student project a stratosphere balloon was developed and launched. This project included the definition of the balloon parameters and the scientific instruments for performing atmospheric measurements, the development of all mechanical and electronic parts, the administration of the project as well as the management tasks related to the balloon launch. The main challenge for the students was the high complexity of the project due to tasks involving many different knowledge domains and the long project duration from the initial definition to the launch.
Introduction: Improving energy efficiency and reducing energy wastage is an important topic of our time. But it is quite difficult to figure out how much of our total electricity bill can be mapped to which device, or at what time the device used the energy. We believe the energy efficiency of normal households can be improved if this kind of transparency were available. In this article, we present a system for energy measurement at mains sockets to gain a transparent view of the energy consumption of each device in a household. It consists of several smart energy measuring devices (SEMDs) that use a low-power radio protocol to dynamically build and connect to a radio network to transfer power usage data to a server. At the server, the data is stored and can be accessed via a web interface.
Results: Our primary goal was to build a back-end system for an energy metering platform with very low energy consumption. This platform can provide data for a variety of services that enable users (the consumers) to understand and improve their energy consumption behavior and increase the overall energy efficiency of their households.
We experimentally analyze the complete photon number statistics of parametric down-conversion and ascertain the influence of multimode effects. Our results clearly reveal a difference between single-mode theoretical description and the measured distributions. Further investigations assure the applicability of loss-tolerant photon number reconstruction and prove strict photon number correlation between signal and idler modes.
Information systems backsourcing describes the transfer of previously outsourced activities, assets, or personnel back to the originating company to regain ownership and control. While there is much research on information systems outsourcing, the topic of backsourcing information systems is still an emerging research area. Therefore, our paper aims to explore and synthesize the existing literature on information systems backsourcing, since to our knowledge no exhaustive literature review of the state of the research is available yet. In this paper, we create a framework to structure the existing research along the overall backsourcing process. We identify different motivators, such as expectation gaps or internal and external organizational changes, leading towards a backsourcing decision, and factors positively or negatively influencing this decision. Additionally, we derive implementation success factors based on the existing literature to guide companies through the backsourcing process. We also differentiate the term backsourcing from related, sometimes synonymously used terms by emphasizing the change of ownership back to the company of origin as the main criterion. Additionally, we discuss opportunities for future research in the field of information systems backsourcing.
To prepare their IT landscape for future business challenges, companies are changing their IT sourcing arrangements by using selective sourcing approaches as well as multi-sourcing with more but smaller sourcing contracts. Companies therefore have to reconsider and re-evaluate their IT sourcing setup more frequently. Collecting data from 251 global experts, we empirically tested the effect of service quality, relationship quality, and switching costs on IT sourcing decisions using partial least squares (PLS) analysis. Drawing on previously conducted expert interviews, our model extends previous studies and introduces a decision maker’s sourcing preferences as a not yet examined moderator on IT sourcing decisions. This allows us to investigate the influence of the decision maker’s beliefs on the decision process. We were able to confirm the negative effect of switching costs on a decision in favor of backsourcing; however, we could not find significant support for the remaining hypotheses. We further discuss potential reasons for our findings and suggest future research opportunities based on our contribution.
There is a common understanding amongst academics that information systems (IS) research sometimes has limited relevance for practitioners. This can be explained by the fact that research lags behind the fast moving IS environment, has limited practical applicability and is hard to access. We have focused on the research area of IS backsourcing, and analyzed practitioner literature to increase our understanding of topics of interest for practitioners, to determine a potential gap between academic and practitioner literature, and to identify future research directions in this field. We observed that most publications are either news or background articles, focusing on describing backsourcing cases. Additionally, we identified four recurring themes, namely reasons for backsourcing, presentation of survey results, discussion of industry trends, and backsourcing success stories. The main reasons identified to trigger backsourcing decisions are cost savings, quality improvements, and increasing control and flexibility. By comparing our findings with academic literature on IS backsourcing, we conclude that generally both literature types cover similar topics. However, researchers have a more formulative or interpretive focus than the often descriptive practitioner literature. Academic literature also examines a broader range of topics, while practitioner literature has a narrower focus. Additionally, we observe one difference regarding applied terminology: while researchers employ the term backsourcing, practitioners mostly use back in-house or insourcing. Our paper contributes to facilitating the exchange between academics and practitioners, presents topics to consider when aiming to increase practical relevance and provides researchers with concrete directions for future research within the field of IS backsourcing.
Modeling, identification and control of an antagonistically actuated joint for telerobotic systems
(2015)
Within this paper, a modeling, identification, and control technique for an antagonistically actuated joint consisting of two pneumatically actuated muscles is presented. The antagonistically actuated joint acts as a test bench for control architectures which are going to be used to control an exoskeleton within a telerobotic system. A static and dynamic model of the muscle and the joint is derived, and the parameters of the models are identified using a least-squares algorithm. The control architecture, consisting of an inner pressure and an outer position controller, is presented. The pressure controller is evaluated using switching valves and compared against proportional valves.
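To illustrate the least-squares identification step in a generic way (the linear-in-parameters force model and the toy data below are assumptions made for this sketch, not the muscle model used in the paper):

```python
# Hedged sketch: least-squares identification of a static muscle model.
import numpy as np

# Measured contraction ratio kappa, gauge pressure p [bar], force F [N] (toy data).
kappa = np.array([0.00, 0.05, 0.10, 0.15, 0.20])
p     = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
F     = np.array([120.0, 230.0, 310.0, 370.0, 410.0])

# Regressor matrix for the assumed model F ≈ a0 + a1*p + a2*kappa,
# solved in the least-squares sense.
Phi = np.column_stack([np.ones_like(p), p, kappa])
params, residuals, rank, _ = np.linalg.lstsq(Phi, F, rcond=None)
print("identified parameters a0, a1, a2:", params)
```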
AUTOSAR specifies a static real-time operating system (AUTOSAR-OS) tailored to the needs of the automotive industry. The AUTOSAR-OS specification is essentially a further development of OSEK-OS. The main differences from OSEK-OS are an extended programming interface for counters, schedule tables for modeling complex time-triggered sequences, and stack monitoring. The scheduler is priority-based and supports preemptable and non-preemptable tasks. Resource management with the Priority Ceiling Protocol prevents deadlocks and priority inversion. Tasks are configured either as basic tasks or as extended tasks. After a short introduction to the OS specification, an architecture for the operating system is derived from it. The concrete implementation is split into a configuration-dependent part and a static source code part. The configuration-dependent part is produced by a source code generator. The static part is built as a layered architecture in order to separate platform-dependent from platform-independent source code, since the AUTOSAR method intended for this purpose is not sufficient. Porting the OS to a new hardware platform shows that only the platform-dependent source code layer of the operating system has to be changed. During validation of the operating system, in addition to functional tests, the performance of characteristic operations such as a context switch was measured. These values are compared with a commercial, highly optimized AUTOSAR-OS.
Quantum computing promises to overcome computational limitations with better and faster solutions for optimization, simulation, and machine learning problems. Europe and Germany are in the process of successfully establishing research and funding programs with the objective to advance the technology’s ecosystem and industrialization, thereby ensuring digital sovereignty, security, and competitiveness. Such an ecosystem comprises hardware/software solution providers, system integrators, and users from research institutions, start-ups, and industry. The vision of the Quantum Technology and Application Consortium (QUTAC) is to establish and advance the quantum computing ecosystem, supporting the ambitious goals of the German government and various research programs. QUTAC comprises ten members representing different industries, in particular automotive manufacturing, chemical and pharmaceutical production, insurance, and technology. In this paper, we survey the current state of quantum computing in these sectors as well as the aerospace industry and identify the contributions of QUTAC to the ecosystem. We propose an application-centric approach for the industrialization of the technology based on proven business impact. This paper identifies 24 different use cases. By formalizing high-value use cases into well-described reference problems and benchmarks, we will guide technological progress and eventually commercialization. Our results will be beneficial to all ecosystem participants, including suppliers, system integrators, software developers, users, policymakers, funding program managers, and investors.
A common way to ray trace subdivision surfaces is by constructing and traversing spatial hierarchies on top of tessellated input primitives. Unfortunately, tessellating surfaces requires a substantial amount of memory storage, and involves significant construction and memory I/O costs. In this paper, we propose a lazy-build caching scheme to efficiently handle these problems while also exploiting the capabilities of today's many-core architectures. To this end, we lazily tessellate patches only when necessary, and utilize adaptive subdivision to efficiently evaluate the underlying surface representation. The core idea of our approach is a shared lazy evaluation cache, which triggers and maintains the surface tessellation. We combine our caching scheme with SIMD-optimized subdivision primitive evaluation and fast hierarchy construction over the tessellated surface. This allows us to achieve high ray tracing performance in complex scenes, outperforming the state of the art while requiring only a fraction of the memory. In addition, our method stays within a fixed memory budget regardless of the tessellation level, which is essential for many applications such as movie production rendering. Beyond the results of this paper, we have integrated our method into Embree, an open source ray tracing framework, thus making interactive ray tracing of subdivision surfaces publicly available.
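A rough sketch of the shared lazy evaluation cache idea (budget-bounded, evaluate-on-demand). This is illustrative only, not the Embree implementation, and it omits the synchronization a cache shared across rendering threads needs:

```python
# Illustrative sketch of a lazy tessellation cache with a fixed budget.
from collections import OrderedDict

class LazyTessellationCache:
    def __init__(self, budget, tessellate):
        self.budget = budget            # max number of cached patch tessellations
        self.tessellate = tessellate    # expensive evaluation, called only on demand
        self.entries = OrderedDict()    # patch_id -> tessellated geometry (LRU order)

    def get(self, patch_id):
        if patch_id in self.entries:            # cache hit: refresh LRU position
            self.entries.move_to_end(patch_id)
            return self.entries[patch_id]
        geometry = self.tessellate(patch_id)    # cache miss: tessellate lazily
        self.entries[patch_id] = geometry
        if len(self.entries) > self.budget:     # stay within the fixed memory budget
            self.entries.popitem(last=False)    # evict the least recently used patch
        return geometry

cache = LazyTessellationCache(budget=2, tessellate=lambda pid: f"mesh({pid})")
for pid in [0, 1, 0, 2, 3, 0]:
    cache.get(pid)
```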
In this paper we present our first steps in defining the type, scope, and relevance of writing in the higher education of software engineering. We aim to identify gaps in scientific research and to raise a new and necessary research interest in order to push research in this area. First, we clarify the relevance of writing in higher education in general. In a second step, we highlight the relevance of writing in the domain of software engineering in particular. The soft skills to be taught to students of engineering professions, and especially to software engineering students, are highly discussed. We discuss the skill of writing from a theoretical view as well as reasons for the high relevance of this skill for future engineers. An obligation to teach writing in higher education is formulated.
Universities are faced with a rising number of dropouts in recent years. This is largely due to students' limited capability of finding individual learning paths through various course materials. However, a possible solution to this problem is the introduction of adaptive learning management systems, which recommend tailored learning paths to students – based on their individual learning styles. For the classification of learning styles, the most commonly used methods are questionnaires and learning analytics. Nevertheless, both methods are prone to errors: questionnaires may give superficial answers due to lack of time or motivation, while learning analytics do not reflect offline learning behavior. This paper proposes an alternative approach to classify students' learning styles by integrating eye tracking in combination with Machine Learning (ML) algorithms.
Incorporating eye tracking technology into the classification process eliminates the potential problems arising from questionnaires or learning analytics by providing a more objective and detailed analysis of the subject's behavior. Moreover, this approach allows for a deeper understanding of subconscious processes and provides valuable insights into the individualized learning preferences of students.
In order to demonstrate this approach, an eye tracking study is conducted with 117 participants using the Tobii Pro Fusion. Using qualitative and quantitative analyses, certain patterns in the subjects' gaze behavior are assigned to their learning styles given by the validated Index of Learning Styles (ILS) questionnaire.
In short, this paper presents an innovative solution to the challenges associated with classifying students' learning styles. By combining eye tracking data with ML algorithms, an accurate and insightful understanding of students' individual learning paths can be achieved, ultimately leading to improved educational outcomes and reduced dropout rates.
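As a minimal sketch of how aggregated gaze features could feed an ML classifier for one ILS dimension (the feature names and the synthetic data are assumptions, not the study's actual pipeline):

```python
# Hedged sketch: classifying an ILS learning-style dimension from gaze features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 117  # number of participants in the study described above
# Assumed per-participant features: mean fixation duration [ms],
# share of fixations on figures vs. text, and saccade rate [1/s].
X = np.column_stack([
    rng.normal(250, 40, n),
    rng.normal(0.6, 0.15, n),
    rng.normal(3.0, 0.5, n),
])
# Assumed binary labels, e.g. the visual/verbal dimension from the ILS questionnaire.
y = rng.integers(0, 2, n)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```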
The dropout rate at universities has been very high for years. Thereby, the inexperience and lack of knowledge of students in dealing with individual learning paths in various courses of study plays a decisive role. Adaptive learning management systems are suitable countermeasures, in which learners’ learning styles are classified using questionnaires or computationally intensive algorithms before a learning path is suggested accordingly. In this paper, a study design for student learning style classification using eye tracking is presented. Furthermore, qualitative and quantitative analyses clarify certain relationships between students’ eye movements and learning styles. With the help of classification based on eye tracking, the filling out of questionnaires or the integration of computationally or cost-intensive algorithms can be made redundant in the future.
The immense diversity of bottle types requires high accuracy when breweries sort bottles for recycling purposes. This extremely complex and time-consuming procedure can result in enormous additional costs for them. This paper presents transfer learning-based algorithms for classifying beer bottle brands using camera images, applicable in individual sorting solutions for different use cases. The problem is tackled using customised EfficientNet, InceptionResNet, and VGG models along with an augmented dataset. In addition, a detailed analysis of different model and parameter combinations is performed, enabling tailor-made technologies for specific conditions and resource limitations. In accompanying validations and subsequent tests, a test accuracy of 100% in the recognition of beer brands was achieved, showing that the proposed method fully contributes to the solution of the problem.
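A hedged sketch of such a transfer-learning setup in Keras; the class count, input size, and hyperparameters are illustrative assumptions, not the paper's configuration:

```python
# Illustrative transfer-learning sketch with a frozen pretrained backbone.
import tensorflow as tf

NUM_BRANDS = 10  # assumed number of beer-bottle brands

base = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = False  # freeze the pretrained backbone

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_BRANDS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(augmented_train_ds, validation_data=val_ds, epochs=10)
```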
Adaptive Moment Estimation (Adam) is a very popular training algorithm for deep neural networks, implemented in many machine learning frameworks. To the best of the authors' knowledge, no complete convergence analysis exists for Adam. The contribution of this paper is a method for the local convergence analysis in batch mode for a deterministic fixed training set, which gives necessary conditions for the hyperparameters of the Adam algorithm. Due to the local nature of the arguments, the objective function can be non-convex but must be at least twice continuously differentiable.
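For reference, the Adam update analyzed here reads, in standard notation with gradient \(g_t = \nabla f(\theta_{t-1})\) and hyperparameters \(\alpha, \beta_1, \beta_2, \varepsilon\):

\[
\begin{aligned}
m_t &= \beta_1 m_{t-1} + (1-\beta_1)\, g_t, \qquad
v_t = \beta_2 v_{t-1} + (1-\beta_2)\, g_t^{\,2},\\
\hat m_t &= \frac{m_t}{1-\beta_1^{\,t}}, \qquad
\hat v_t = \frac{v_t}{1-\beta_2^{\,t}}, \qquad
\theta_t = \theta_{t-1} - \alpha\, \frac{\hat m_t}{\sqrt{\hat v_t} + \varepsilon}.
\end{aligned}
\]

The local analysis described above derives necessary conditions on exactly these hyperparameters.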
The main issues in many image processing applications are object recognition and object detection, which answer the questions of whether an object is present and, if so, where it is located. Popular object detection algorithms like YOLO use a regression formulation for the whole problem, especially for the bounding box parameters. In production industry the setting is usually different: one usually knows the object type and rather wants to know with high precision where the object is. We study a prototype application in this area where we identify the rotation of an object in a plane. To solve this problem we use a regression approach with a CNN architecture as a function approximator. We compare our results to standard image processing algorithms, which do not use neural networks, and present quantitative results on the accuracy. CNNs seem at least competitive to classical image processing.
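As an illustration only (the paper does not prescribe this architecture or target encoding): a small CNN regressing a planar rotation, where predicting \((\sin\theta, \cos\theta)\) is one common way to avoid the discontinuity at 0°/360°:

```python
# Illustrative sketch of CNN-based rotation regression; architecture and the
# (sin, cos) target encoding are assumptions made here, not the paper's setup.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(2, activation="tanh"),  # predicts (sin θ, cos θ)
])
model.compile(optimizer="adam", loss="mse")

def angle_from_prediction(pred):
    """Recover the angle in degrees from a (sin, cos) prediction."""
    return np.degrees(np.arctan2(pred[0], pred[1])) % 360.0
```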
One of the most popular training algorithms for deep neural networks is the Adaptive Moment Estimation (Adam) introduced by Kingma and Ba. Despite its success in many applications there is no satisfactory convergence analysis: only local convergence can be shown for batch mode under some restrictions on the hyperparameters, and counterexamples exist for incremental mode. Recent results show that for simple quadratic objective functions limit cycles of period 2 exist in batch mode, but only for atypical hyperparameters, and only for the algorithm without bias correction. We extend the convergence analysis to all choices of the hyperparameters for quadratic functions. This finally answers the question of convergence for Adam in batch mode in the negative. We analyze the stability of these limit cycles and relate our analysis to other results where approximate convergence was shown, but under the additional assumption of bounded gradients, which does not apply to quadratic functions. The investigation heavily relies on the use of computer algebra due to the complexity of the equations.
Reasonable integration of gesture-based automotive HMI-functionality offers potential safety benefits by reducing driver distractions and glance times to operate tertiary in-car devices. Stereo camera systems are a well investigated choice to perform the task of generating depth data for spatial gesture recognition. This paper describes the functionality of our stereo vision software, which is intended for application in a target system based on CMOS wafer-level cameras. The retrieved point cloud data was passed to a gesture-based sample application.
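For context, depth from a rectified stereo pair follows the standard relation between focal length \(f\), baseline \(b\), and disparity \(d\):

\[
Z = \frac{f \cdot b}{d}, \qquad d = x_{\text{left}} - x_{\text{right}},
\]

so the quality of the recovered point cloud is governed by how precisely the stereo software estimates the disparity.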
This article provides a mathematical analysis of singular (nonsmooth) artifacts added to reconstructions by filtered backprojection (FBP) type algorithms for X-ray computed tomography (CT) with arbitrary incomplete data. We prove that these singular artifacts arise from points at the boundary of the data set. Our results show that, depending on the geometry of this boundary, two types of artifacts can arise: object-dependent and object-independent artifacts. Object-dependent artifacts are generated by singularities of the object being scanned, and these artifacts can extend along lines. They generalize the streak artifacts observed in limited-angle tomography. Object-independent artifacts, on the other hand, are essentially independent of the object and take one of two forms: streaks on lines if the boundary of the data set is not smooth at a point, and curved artifacts if the boundary is smooth locally. We prove that these streak and curve artifacts are the only singular artifacts that can occur for FBP in the continuous case. In addition to the geometric description of artifacts, the article provides characterizations of their strength in Sobolev scale in certain cases. The results of this article apply to the well-known incomplete data problems, including limited-angle and region-of-interest tomography, as well as to unconventional X-ray CT imaging setups that arise in new practical applications. Reconstructions from simulated and real data are analyzed to illustrate our theorems, including the reconstruction that motivated this work: a synchrotron data set in which artifacts appear on lines that have no relation to the object.
This article provides a mathematical classification of artifacts from arbitrary incomplete X-ray tomography data when using the classical filtered backprojection algorithm. Using microlocal analysis, we prove that all artifacts arise from points at the boundary of the data set. Our results show that, depending on the geometry of the data set boundary, two types of artifacts can arise: object-dependent and object-independent artifacts. The object-dependent artifacts are generated by singularities of the object being scanned, and these artifacts can extend all along lines. This is a generalization of the streak artifacts observed in limited angle CT. The article also characterizes two new phenomena: the object-independent artifacts are caused only by the geometry of the data set boundary; they occur along lines if the boundary of the data set is not smooth and along curves if the boundary of the data set is smooth. In addition to the geometric description of artifacts, the article also provides characterizations of their strength in Sobolev scale in certain cases. Moreover, numerical reconstructions from simulated and real data are presented illustrating our theorems. This work is motivated by a reconstruction we present from a synchrotron data set in which artifacts along lines appeared that were independent of the object. The results of this article apply to a wide range of well-known incomplete data problems, including limited angle CT and region of interest tomography, as well as to unconventional x-ray CT imaging setups. Some of those problems are explicitly addressed in this article, theoretically and numerically.
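For reference, the classical parallel-beam FBP reconstruction studied in both versions of this work has the standard form

\[
f(x) \;=\; \int_{0}^{\pi} \bigl(p(\theta,\cdot) * h\bigr)\!\bigl(x\cdot\omega(\theta)\bigr)\, d\theta,
\qquad \omega(\theta) = (\cos\theta, \sin\theta),
\]

where \(p(\theta, s)\) is the parallel-beam projection data and \(h\) the ramp-filter kernel. For incomplete data the integral is restricted to the measured part of the data, and the theorems above locate the added singular artifacts at the boundary of that restricted data set.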
We study samples with full and partial occlusion causing streak artifacts, and propose two modifications of filtered backprojection for artifact removal. Data is obtained by the SPring-8 synchrotron using a monochromatic parallel-beam scan [1]. Thresholding in the sinogram segments the metal, resulting in edges on which we apply 1) a smooth transition, or 2) a Dirichlet boundary condition.
Ascertaining the feasibility of independent falsification or repetition of published results is vital to the scientific process, and replication or reproduction experiments are routinely performed in many disciplines. Unfortunately, such studies are only scarcely available in database research, with few papers dedicated to re-evaluating published results. In this paper, we conduct a case study on replicating and reproducing a study on schema evolution in embedded databases. We can exactly repeat the outcome for one out of four database applications studied, and come close in two further cases. By reporting results, efforts, and obstacles encountered, we hope to increase appreciation for the substantial efforts required to ensure reproducibility. By discussing the minutiae required to ascertain reproducible work, we argue that such important, but often ignored aspects of scientific work should receive more credit in the evaluation of future research.
How can functional safety in vehicles be ensured effectively and in a future-proof way? And how can this be achieved specifically in electrified powertrains? AVL, in cooperation with the LaS³ and the Universität der Bundeswehr München, addressed this question in a research project. The answer: automatic memory tests, in combination with program-flow monitoring and redundant hardware, can be replaced particularly effectively by "coded processing", because here diversity is increased in software in order to reduce the more elaborate and costly redundancy in hardware.
Safety has the highest priority because it helps contribute to customer confidence and thereby ensures further growth of new markets like electromobility. Therefore, in series production, redundant hardware concepts like dual-core microcontrollers running in lock-step mode are used to reach, for example, ASIL D safety requirements given by ISO 26262. Coded processing is capable of reducing redundancy in hardware by adding diverse redundancy in software, e.g. by specific coding of data and instructions. A system with two coded processing channels is considered. Both channels are active. When one channel fails, the service can be continued with the other channel. It is conceivable that the two channels with implemented coded processing run with time redundancy on a single core, or on a multi-core system where, for example, different ASIL levels are partitioned onto different cores. In this paper a redundancy concept based on coded processing is taken into account. The improvement of the Mean Time To Failure achieved by safeguarding the system with coded processing is computed for fail-safe as well as for fail-operational systems. The use of the coded processing approach in safeguarding fail-safe systems is proven.
Safety of embedded systems has the highest priority because it helps contribute to customer confidence and thereby ensures growth of new markets, like electromobility. In series production, fail-safe systems as well as fault-tolerant systems are realized with redundant hardware concepts like dual-core microcontrollers running in lock-step mode to reach the highest safety requirements given by standards like ISO 26262 or IEC 61508. In contrast to the hardware redundancy approach, approaches with information, time, and/or software redundancy have also been available for several years. One of them is known as coded processing or AN codes. Coded processing is capable of reducing redundancy in hardware by adding diverse redundancy in software. But the breakthrough of coded processing never took place. One reason for this seems to be the myths which are widely propagated on this subject and the associated uncertainties. In this paper some myths are busted, like the usage of prime numbers as transformation factor A, the myth that greater transformation factors are better, or the myth about the residual error probability defined as 1/A. Some of them have been propagated since 1989. The aim of this paper is to provide more clarity and understanding of this technique, perhaps to pave the way for further functional safety concepts based on coded processing approaches.
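A minimal sketch of the AN-code idea referred to above: data words are multiplied by a transformation factor A, and any coded value that is not divisible by A reveals a corruption. The concrete value of A below is only an example, not a recommendation:

```python
# Minimal AN-code sketch: encode by multiplying with A, check divisibility.
A = 58659  # example transformation factor

def encode(x: int) -> int:
    return A * x

def check(xc: int) -> bool:
    """A coded value is considered valid only if it is a multiple of A."""
    return xc % A == 0

def decode(xc: int) -> int:
    if not check(xc):
        raise ValueError("coded value corrupted")
    return xc // A

xc = encode(42)
assert check(xc) and decode(xc) == 42
# A single flipped bit shifts the value by ±128, which is not a multiple of A,
# so the corruption is detected.
assert not check(xc ^ (1 << 7))
```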
The safety of electric vehicles has the highest priority because it helps contribute to customer confidence and thereby ensures further growth of the electromobility market. Therefore, in series production, redundant hardware concepts like dual-core microcontrollers running in lock-step mode are used to reach ASIL D safety requirements given by ISO 26262. Coded processing is capable of reducing redundancy in hardware by adding diverse redundancy in software, e.g. by specific coding of data and instructions. A system with two coded processing channels is considered. One channel is active and one is in cold standby. When the active channel fails, the service is switched from the active channel to the standby channel. It is conceivable that the two channels with implemented coded processing run with time redundancy on a single core, or on a multi-core system where, for example, different ASIL levels are partitioned onto different cores. In this paper a redundancy concept based on coded processing and software rejuvenation is taken into account.
Capability of single hardware channel for automotive safety applications according to ISO 26262
(2012)
Over the last years, configurational research has become increasingly popular in the Information Systems (IS) discipline. Researchers value configurational methods like Qualitative Comparative Analysis (QCA) as their application contributes to a better understanding of complex phenomena. QCA helps to uncover interrelations of conditions that lead to an outcome, building on the principles of equifinality, conjunctural causation, and asymmetry. More recently, IS researchers have started to analyze qualitative data, like case study data, with QCA. However, there is a lack of methodological guidance on how to calibrate qualitative data into set membership values for QCA. Therefore, this paper structures methodological steps and the associated options to calibrate qualitative data from an interdisciplinary perspective and critically reviews the observed methodological choices in IS research. This paper also gives recommendations for calibrating qualitative data to support informed methodological choices for future research.
In the era of digitalization, companies review how to build competitive advantage drawing on their capabilities. In this context, the paper revisits the concept of IT capabilities. We summarize the “canonical” body of literature to characterize how leading scholars conceptualized and differentiated IT capabilities and then we show how new types of IT capabilities (for example, rising from SMACIT technologies) relate to this. We find that the resource-based view is still leading in how to conceptualize IT capabilities. However, an alternative perspective on IT assets is emerging, looking at it from the angle of digital technologies as stacks. Digital technologies form the foundation for a new technology-driven perspective on IT capabilities. This new view complements the established differentiation of IT capabilities considering IT infrastructure flexibility, IT management, and IT personnel capability. Both perspectives describe the explorative and exploitative nature of IT capabilities. This paper helps scholars and practitioners to clearly distinguish different perspectives of IT capabilities on how to build a competitive advantage from IT today.
This paper assesses the relation between personality, demographics, and learning style. Hence, data is collected from 200 participants using 1) the BFI-10 to obtain the participant’s expression of personality traits according to the five-factor model, 2) the ILS to determine the participant’s learning style according to Felder and Silverman, and 3) a demographic questionnaire. From the obtained data, we train and evaluate a Bayesian network. Using Bayesian statistics, we show that age and gender slightly influence personality and that demographics as well as personality have at least a minor effect on learning styles. We also discuss the limitations and future work of the presented approach.
This paper presents an overview on qualification and certification of tools used in the phases of the safety lifecycle for safety-critical applications, either for development or for verification and validation. Software development tools are widely used in the development of safety-critical software systems. More verification and validation procedures will be automated by software tools to reduce time consuming manual testing. The impact of software tools on functional safety is discussed. Based on normative regulations like IEC 61508 and ISO DIS 26262 different approaches for tool qualification and certification are presented.
In this paper compliant multistable tensegrity structures with discrete variable stiffness are investigated. The different stiffness states result from the different prestress states of these structures corresponding to the equilibrium configurations. Three planar tensegrity mechanisms with two stable equilibrium configurations are considered exemplarily. The overall stiffness of these structures is characterized by investigations with regard to their geometric nonlinear static behavior. Dynamical analyses show the possibility of the change between the equilibrium configurations and enable the derivation of suitable actuation strategies.
The use of compliant tensegrity structures in robotic applications offers several advantageous properties. In this work the dynamic behaviour of a planar tensegrity structure with multiple static equilibrium configurations is analysed with respect to its further use in a two-finger gripper application. In this application, two equilibrium configurations of the structure correspond to the opened and closed states of the gripper. The transition between these equilibrium configurations, caused by a properly selected actuation method, depends essentially on the actuation parameters and on the system parameters. To study the behaviour of the dynamic system and possible actuation methods, the nonlinear equations of motion are derived and transient dynamic analyses are performed. The movement behaviour is analysed in relation to the prestress of the structure and the actuation parameters.
Big Data at Work
(2021)
Does productivity decline with age? Does population aging harm economic growth? We exploit process-generated data from a large and typical service-sector company. We find no decline in average productivity in the age range of 20-60. This result is precisely measured. Our innovative identification strategy corrects for sample selection, endogeneity of age composition and age-cohort confounding. Our big data are essential to extract the signal from the noise that has marred many previous studies. While average productivity stays flat, we find variation according to task complexity. Productivity increases with age in teams with more demanding tasks and decreases in routine tasks.
The automotive industry currently faces several challenges, including growing complexity in system architecture. At the same time, the task load as well as the performance requirements increase. To address this problem, the A3Fa research project evaluates scalable distributed concepts for future vehicle system architectures. These can be seen as comparable to cluster-computing systems, which are applied in high-performance or high-availability use cases. Methods used in such scenarios will also be important features in future vehicle architectures, such as horizontal application scalability, application load balancing and reallocation, as well as functionality upgrades triggered by the user.
This paper focuses on concepts and methods for the reliability of applications and hardware in future in-vehicle distributed system architectures. It is argued that future automotive computing systems will evolve towards enterprise IT systems similar to today’s data centers. Furthermore, it is stated that these vehicle systems can benefit greatly from such IT systems.
In particular, protection against failures of functions and hardware in such systems is discussed. For this purpose, various such mechanisms used in information technology are investigated. A layer-based classification is proposed, representing the different fail-safe levels.
We consider the task of building Big Data software systems, offered as software-as-a-service. These applications are commonly backed by NoSQL data stores that address the proverbial Vs of Big Data processing: NoSQL data stores can handle large volumes of data, and many systems do not enforce a global schema, to account for structural variety in data. Thus, software engineers can design the data model on the go, a flexibility that is particularly crucial in agile software development. However, NoSQL data stores commonly do not yet account for the veracity of data when the structure of persisted data changes. Yet such changes are an inevitable consequence of agile software development. In most NoSQL-based application stacks, schema evolution is handled completely within the application code, usually involving object mapper libraries. Yet simple code refactorings, such as renaming a class attribute at the source code level, can cause data loss or runtime errors once the application has been deployed to production. We address this pain point by contributing type checking rules that we have implemented within an IDE plug-in. Our plug-in ControVol statically type checks the object mapper class declarations against the code release history. ControVol is thus capable of detecting common yet risky cases of mismatched data and schema, and can even suggest automatic fixes.
In building software-as-a-service applications, a flexible development environment is key to shipping early and often. Therefore, schema-flexible data stores are becoming more and more popular. They can store data with heterogeneous structure, allowing for new releases to be pushed frequently, without having to migrate legacy data first. However, the current application code must continue to work with any legacy data that has already been persisted in production. To let legacy data structurally "catch up" with the latest application code, developers commonly employ object mapper libraries with life-cycle annotations. Yet when used without caution, they can cause runtime errors and even data loss. We present ControVol, an IDE plugin that detects evolutionary changes to the application code that are incompatible with legacy data. ControVol warns developers already at development time, and even suggests automatic fixes for lazily migrating legacy data when it is loaded into the application. Thus, ControVol ensures that the structure of legacy data can catch up with the structure expected by the latest software release.
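An illustrative sketch of the kind of mismatch ControVol flags, written in plain Python rather than the Java object mappers ControVol targets; the class and field names are made up for this example:

```python
# Sketch: a rename refactoring in the latest code silently breaks legacy data.
class CustomerV1:
    def __init__(self, name):
        self.name = name            # release 1 persists {"name": ...}

class CustomerV2:
    def __init__(self, full_name):
        self.full_name = full_name  # release 2 expects {"full_name": ...}

legacy_document = {"name": "Alice"}  # persisted by release 1, still in production

# Naively mapping the legacy document onto the new class loses the value --
# exactly the mismatch a static check against the release history would flag.
customer = CustomerV2(legacy_document.get("full_name"))
assert customer.full_name is None
```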
The use of mechanically prestressed compliant structures in soft robotics is a recently discussed topic. Tensegrity structures, consisting of a set of rigid disconnected compressed members connected to a continuous net of prestressed elastic tensioned members, form one specific class of these structures. Robots based on these structures have manifold shape-changing abilities and can adapt their mechanical properties reversibly by changing their prestress state according to specific tasks.
In this paper, selected aspects of the potential use of elastomer materials in these structures are discussed with the help of theoretical analysis. To this end, a selected basic tensegrity structure with elastomer members is investigated, focusing on the stiffness and shape-changing ability in dependence on the nonlinear hyperelastic behavior of the used elastomer materials. The considered structure is compared with a conventional tensegrity structure with linearly elastic tensioned members. Finally, selected criteria for the advantageous use of elastomer materials in compliant tensegrity robots are discussed.
Rodents use their mystacial vibrissae, e.g., to recognize the shape or determine the surface texture of an object. The vibrissal sensory system consists of two components: the hair shaft and the follicle-sinus complex (FSC). Both components affect the collection of information, but the impacts of the different properties are not completely clear. Borrowing from the natural example, the goal is to design a powerful artificial sensor. The influence of a continuous visco-elastic support is analyzed for an artificial sensor, following hypotheses about the FSC. Starting with a theoretical treatment of this scenario, the vibrissa is modeled as an Euler-Bernoulli bending beam with a partially continuous visco-elastic support. The numerical simulations are validated by experiments. Using a steel strip as a technical vibrissa and a magneto-sensitive elastomer (MSE) as representation of the artificial continuous visco-elastic support (the FSC, respectively), the first resonance frequency is determined.
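A minimal form of such a model, assuming a Winkler-type visco-elastic foundation with stiffness k(x) and damping d(x) that vanish outside the supported segment, is the Euler-Bernoulli equation

\[
EI\,\frac{\partial^4 w}{\partial x^4}
+ \rho A\,\frac{\partial^2 w}{\partial t^2}
+ d(x)\,\frac{\partial w}{\partial t}
+ k(x)\, w = 0,
\]

from whose associated eigenvalue problem the first resonance frequency follows; the concrete foundation model used in the paper may differ.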
Frequency conversion (FC) and type-II parametric down-conversion (PDC) processes serve as basic building blocks for the implementation of quantum optical experiments: type-II PDC enables the efficient creation of quantum states such as photon-number states and Einstein–Podolsky–Rosen (EPR)-states. FC gives rise to technologies enabling efficient atom–photon coupling, ultrafast pulse gates and enhanced detection schemes. However, despite their widespread deployment, their theoretical treatment remains challenging. Especially the multi-photon components in the high-gain regime as well as the explicit time-dependence of the involved Hamiltonians hamper an efficient theoretical description of these nonlinear optical processes. In this paper, we investigate these effects and put forward two models that enable a full description of FC and type-II PDC in the high-gain regime. We present a rigorous numerical model relying on the solution of coupled integro-differential equations that covers the complete dynamics of the process. As an alternative, we develop a simplified model that, at the expense of neglecting time-ordering effects, enables an analytical solution. While the simplified model approximates the correct solution with high fidelity in a broad parameter range, sufficient for many experimental situations, such as FC with low efficiency, entangled photon-pair generation and the heralding of single photons from type-II PDC, our investigations reveal that the rigorous model predicts a decreased performance for FC processes in quantum pulse gate applications and an enhanced EPR-state generation rate during type-II PDC, when EPR squeezing values above 12 dB are considered.
Measurement is the only part of a general quantum system that has yet to be characterised experimentally in a complete manner. Detector tomography provides a procedure for doing just this; an arbitrary measurement device can be fully characterised, and thus calibrated, in a systematic way without access to its components or its design. The result is a reconstructed POVM containing the measurement operators associated with each measurement outcome. We consider two detectors, a single-photon detector and a photon-number counter, and propose an easily realised experimental apparatus to perform detector tomography on them. We also present a method of visualising the resulting measurement operators.
This paper introduces a novel chaotic flower pollination algorithm (CFPA) to solve a tardiness-constrained flow-shop scheduling problem with simultaneously loaded stations. This industrial manufacturing problem is modeled from a filter basket production line in Germany and has generally been solved using standard deterministic algorithms. This research develops a metaheuristic approach based on the highly efficient flower pollination algorithm coupled with different chaos maps for stochasticity. The objective function targeted is the tardiness constraint of the due dates. Fifteen different experiments with thirty scenarios are generated to mimic industrial conditions. The results are compared with a genetic algorithm and with the four standard benchmark priority-rule-based deterministic algorithms First In First Out, Raghu and Rajendran, Shortest Processing Time, and Slack. From the obtained results and the analysis of the relative difference, percentage relative difference, and t-tests, CFPA was found to perform significantly better than the deterministic heuristics and the genetic algorithm.
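A hedged sketch of a chaotic flower pollination algorithm on a toy objective: a logistic map replaces uniform random draws for the switch decision, and a Gaussian step stands in for the Lévy flight of the original FPA, so this is a simplification rather than the paper's exact operators:

```python
# Simplified chaotic FPA sketch on a toy minimization problem.
import numpy as np

def chaotic_fpa(objective, dim=5, n_flowers=20, iters=200, p_switch=0.8, seed=1):
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, (n_flowers, dim))
    fitness = np.array([objective(x) for x in X])
    best = X[fitness.argmin()].copy()
    c = 0.7                                   # logistic-map state in (0, 1)
    for _ in range(iters):
        for i in range(n_flowers):
            c = 4.0 * c * (1.0 - c)           # logistic chaos map
            if c < p_switch:                  # global pollination toward the best flower
                step = rng.normal(size=dim)   # stand-in for a Lévy flight
                cand = X[i] + 0.1 * step * (best - X[i])
            else:                             # local pollination between two flowers
                j, k = rng.choice(n_flowers, 2, replace=False)
                cand = X[i] + rng.uniform() * (X[j] - X[k])
            f = objective(cand)
            if f < fitness[i]:                # greedy acceptance
                X[i], fitness[i] = cand, f
                if f < objective(best):
                    best = cand.copy()
    return best, objective(best)

print(chaotic_fpa(lambda x: float(np.sum(x ** 2))))
```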
Semantic alignment of application software components’ ontologies is of great interest in vehicle application domains that manipulate heterogeneous overlapping knowledge application frameworks. In the past few years, with the growth of novel vehicle service requirements such as autonomous driving, V2X (Vehicle-to-Vehicle communication) and many others, automotive application software component models are becoming increasingly collaborative with other qualified cross-enterprise industrial partners to accomplish these complex service requirements. The most daunting impediment to this cross-enterprise collaboration is semantic interoperability. For efficient service collaboration through cross-enterprise semantic interoperability between the vehicle application frameworks’ software components, aligning the interface ontologies of these components by identifying the depth of semantic alignment relationships between the concepts of the interface ontologies is the major focus of this paper. In contrast to several existing ontology structural metrics, this work defines, evaluates, and validates ontology metrics to measure the depth of semantic alignment between the vehicle domain software component frameworks’ interface ontological models. To emphasize the substantial role of semantic alignment of software component frameworks’ interface ontologies in semantic interoperability, a typical vehicle domain case study involving vehicle applications is considered for demonstration.
In the last few years, the collaboration of services between service-oriented, cross-enterprise vehicle application frameworks has gradually increased to generate novel and more complicated vehicle services for the automotive industry. In these service collaboration scenarios, where heterogeneous application frameworks participate to realize complex vehicle services, a source of discord is that the service providers must always check, before service deployment, whether the clients or service consumers on the other side of the communication link are compatible with a given service's API (Application Program Interface). While using standardized templates like ontologies for APIs' semantic specifications is crucial for service discovery and semantic interoperability, accessing these service APIs' semantic data through a standardized and understandable syntactical specification template is equally substantial to ease service interoperability. Such complex service collaboration scenarios motivate this research work, which proposes a design approach towards a standardized, domain-specific, platform-agnostic semantic and syntactic specification of vehicle service API models. This paper also uses a typical vehicle domain case study to illustrate the design approach and a reference mapping between the platform-agnostic semantic specification of a vehicle service API ontological model and its corresponding language-neutral, syntactic representation using the OpenAPI standard.
Over the past few years, ontology merging and ontology semantic alignment have gained significant interest as research topics in the automotive application domain for finding solutions to semantic data heterogeneity. To accomplish complex and novel vehicle service requirements such as autonomous driving, V2X (Vehicle-to-Vehicle communication), etc., automotive applications involve collaborations of platform-specific data from heterogeneous enterprise component frameworks, and consequently there has been an increase in data interoperability issues. At the application component level, data interoperability relies on the semantic alignment or mapping between the various component framework interface data models represented as XML schemas (XSD). Although XML schemas are the preferred standard for exchanging interface descriptions between most components in the automotive application domain, data interoperability between semantically equivalent but structurally different data constructs of multiple heterogeneous XSDs remains a challenge in the absence of an ontology-based approach. To confront this crucial requirement for data interoperability, and to increase in effect the reuse of existing components through their interfaces, we propose an approach to semantically map the various component framework interface data models when expressed as ontology schemas, based on the exploration of semantic synergies. The transformation between XSD and RDF (Resource Description Framework) schema representations and the use of queries over the ontology schemas for semantic mapping are demonstrated, including a real-world case study.
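A small sketch of the general idea, assuming the rdflib library; the namespace, labels, and the label-equality alignment rule are illustrative assumptions, not the paper's mapping procedure:

```python
# Hedged sketch: two structurally different XSD elements represented as RDF
# triples, with a SPARQL query proposing candidate semantic alignments.
from rdflib import Graph, Namespace, Literal, RDF

EX = Namespace("http://example.org/vehicle#")
g = Graph()
# Framework A models speed as <xs:element name="VehicleSpeed" type="xs:float"/>.
g.add((EX.VehicleSpeed, RDF.type, EX.InterfaceElement))
g.add((EX.VehicleSpeed, EX.hasLabel, Literal("vehicle speed")))
# Framework B nests the same concept differently, e.g. <Signal><Speed/></Signal>.
g.add((EX.Speed, RDF.type, EX.InterfaceElement))
g.add((EX.Speed, EX.hasLabel, Literal("vehicle speed")))

# Elements sharing a label are candidate alignments despite different XSD structure.
q = """
SELECT ?a ?b WHERE {
  ?a <http://example.org/vehicle#hasLabel> ?l .
  ?b <http://example.org/vehicle#hasLabel> ?l .
  FILTER (?a != ?b)
}"""
for a, b in g.query(q):
    print(a, "aligns with", b)
```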
Barrett's esophagus has seen a swift rise in the number of cases in the past years. Although traditional diagnosis methods play a vital role in early-stage treatment, they are generally time- and resource-consuming. In this context, computer-aided approaches for automatic diagnosis emerged in the literature, since early detection is intrinsically related to remission probabilities. However, they still suffer from drawbacks because of the lack of available data for machine learning purposes, thus implying reduced recognition rates. This work introduces Generative Adversarial Networks to generate high-quality endoscopic images, thereby identifying Barrett's esophagus and adenocarcinoma more precisely. Further, Convolutional Neural Networks are used for feature extraction and classification purposes. The proposed approach is validated over two datasets of endoscopic images, with the experiments conducted over the full and patch-split images. The application of Deep Convolutional Generative Adversarial Networks for the data augmentation step and LeNet-5 and AlexNet for the classification step allowed us to validate the proposed methodology over an extensive set of datasets (based on original and augmented sets), reaching results of 90% accuracy for the patch-based approach and 85% for the image-based approach. Both results are based on augmented datasets and are statistically different from the ones obtained in the original datasets of the same kind. Moreover, the impact of data augmentation was evaluated in the context of image description and classification, and the results obtained using synthetic images outperformed the ones over the original datasets, as well as other recent approaches from the literature. Such results suggest promising insights related to the importance of proper data for accurate classification concerning computer-assisted Barrett's esophagus and adenocarcinoma detection.
As part of a research project, a platform for brokering teleconsultations and providing a consultation record is to be connected to the German telematics infrastructure (TI). To achieve both the best possible scalability and optimal integration into existing systems and applications, HL7 FHIR was chosen as the syntactic standard for the rehabilitation consultation record. This document provides a systematic overview of the steps and prerequisites necessary to establish this connection.
Proportionate fair (Pfair) scheduling, which allows task migration at runtime and assigns each task processing time according to its weight, is one of the most efficient groups of SMP multiprocessor scheduling algorithms known so far. Its drawbacks are tight requirements on the task system, namely the restriction to periodic task systems with synchronized task activation, quantized task execution time, and implicit task deadlines. A typical embedded real-time system most likely does not fulfill these requirements. In this paper we address violations of these requirements. For heterogeneous task systems, we define the multiple time base (MTB) task system, which is a less pessimistic model than sporadic task systems and is used for automotive systems. We apply the concept of Pfair scheduling to MTB task systems, resulting in partly proportionate fair (Partly-Pfair) scheduling. The restrictions on MTB task systems required for Partly-Pfairness are weaker than the restrictions on periodic task systems required for Pfairness. In a simulation-based study we examined the performance of Partly-Pfair-PD and found it capable of scheduling feasible MTB task sets causing a load of up to 100% of the system capacity.
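For readers unfamiliar with Pfair scheduling, the following minimal Python sketch shows the two quantities the scheme is built on, the per-task weight and the lag against the fluid schedule; it illustrates the textbook definitions, not the Partly-Pfair algorithm itself:

# Hedged illustration: basic Pfair quantities. A task with execution
# requirement e and period p has weight w = e/p; a schedule is Pfair iff
# the lag stays strictly between -1 and +1 at every integer time t.
def weight(e, p):
    return e / p

def lag(e, p, allocated_quanta, t):
    """Fluid allocation w*t minus the quanta actually granted up to time t."""
    return weight(e, p) * t - allocated_quanta

# Example: a task with e=2, p=5 that received 1 quantum by t=3
print(lag(2, 5, allocated_quanta=1, t=3))   # 0.2 -> within (-1, 1), Pfair so far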
Partitionierungs-Scheduling von Automotive Restricted Tasksystemen auf Multiprozessorplattformen
(2009)
The embedded market is adjusting to a new challenge: the transition from single-core to multi-core processor systems. In this context, the implementation of the ISO 26262 standard is intended to ensure the functional safety of the electrical and electronic systems in motor vehicles. In this contribution, Hochschule Regensburg and TÜV Süd Automotive GmbH consider the scheduling of a real-time system as a safety-relevant sub-system.
Partly Proportionate fair (Partly-Pfair) scheduling, which allows task migration at runtime and assigns each task processing time according to its weight, makes it possible to build highly efficient embedded multi-core systems. Due to its non-work-conserving behavior, which may leave the CPU idle even when tasks are ready to execute, tasks finish only shortly before their deadlines. The benefit is lower task jitter, but additional workload, e.g. caused by interrupts, can lead to deadline violations. In this paper we present a work-conserving extension of Partly-Pfair scheduling, called P-ERfair scheduling, and the algorithm P-ERfair-PD2, which applies the Pfair modifications used for Partly-Pfair to the concept of ERfairness and the PD2 policies. With a simulation-based schedulability examination we show for multiple time base (MTB) task sets that P-ERfair-PD2 has the same performance as Partly-Pfair-PD2. Additionally, we show that P-ERfair-PD2 has a much higher robustness against perturbations and is therefore well suited for embedded domains, especially the automotive domain.
In addition to functional requirements, embedded systems are particularly subject to non-functional quality requirements such as efficiency, reliability, and real-time capability. With the growing demand for computing capacity, previous concepts for increasing the performance of single-core systems can no longer be applied; the transition to multi-core systems becomes necessary. In the second part of this work, a simulation-based approach for comparing multi-core scheduling algorithms is presented, which is used to examine algorithms for multi-core systems with full migration and dynamic task priority. We extend this approach by a method for examining a set of task sets with stochastically described properties and compare it with the algorithms BinPacking-EDF and P-ERfair-PD² described in Part 1, for a group of automotive powertrain systems.
In addition to functional requirements, embedded systems are particularly subject to non-functional quality requirements such as efficiency, reliability, and real-time capability. With the growing demand for computing capacity, previous concepts for increasing the performance of single-core systems can no longer be applied; the transition to multi-core systems becomes necessary. In the first part of this work, a possible processor architecture for future automotive multi-core systems and the abstraction of the software for these systems are presented. After a classification of multi-core scheduling algorithms, we present, as examples, one algorithm with static task allocation and one with dynamic task allocation. Both algorithms are transfers of theoretically studied algorithms to automotive systems.
Grid applications are increasingly being developed as workflows using well-structured, reusable components. We argue that components with well-defined semantics facilitate efficient scheduling on the Grid. We have previously developed a user-transparent scheduling approach for Higher-Order Components (HOCs) – parallel implementations of typical programming patterns, accessible and customizable via Web services. Our approach combines three scheduling techniques: using cost functions for reducing communication overhead, reusing schedules for similar workflows, and the aggregated submission of jobs. We analyze the user-transparent scheduling from four perspectives: the ease of integration into already existing Grid scheduling systems, the gains for individual users, the advantages for resource providers, and the robustness with respect to execution failures. We perform our evaluation using the KOALA Grid scheduler, extended to support our user-transparent scheduling, which we run on the DAS-2 system combining over 200 nodes at five sites in the Netherlands. The experimental results show an increase in throughput of more than 100%, a decrease in response time of 50%, and a reduction of failures by 45% for the considered scenarios.
We suggest that parallel software components used for grid computing should be adaptable to application-specific requirements, instead of developing new components from scratch for each particular application. As an example, we take a parallel farm component which is “embarrassingly parallel”, i. e., free of dependencies, and adapt it to the wavefront processing pattern with dependencies that impact its behavior. We describe our approach in the context of Higher-Order Components (HOCs), with the Java-based system Lithium as our implementation framework. The adaptation process relies on HOCs’ mobile code parameters that are shipped over the network of the grid. We describe our implementation of the proposed component adaptation method and report first experimental results for a particular grid application — the alignment of DNA sequence pairs, a popular, time-critical problem in computational molecular biology.
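A minimal sketch (in Python rather than the Java-based Lithium, purely to illustrate the two patterns) of the difference between a dependency-free farm and a wavefront traversal, where a cell (i, j) may only start after its upper and left neighbours; the worker functions and pool size are assumptions:

# Hedged sketch: task farm versus wavefront. The farm runs independent
# tasks in any order; the wavefront releases cell (i, j) only after
# (i-1, j) and (i, j-1) are done, the dependency pattern of pairwise
# sequence alignment.
from concurrent.futures import ThreadPoolExecutor

def farm(worker, tasks, max_workers=4):
    """Embarrassingly parallel: no ordering between tasks."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(worker, tasks))

def wavefront(cell_worker, n, m, max_workers=4):
    """Process an n x m grid anti-diagonal by anti-diagonal."""
    results = {}
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        for d in range(n + m - 1):                           # anti-diagonals
            cells = [(i, d - i) for i in range(n) if 0 <= d - i < m]
            values = list(pool.map(lambda c: cell_worker(c, results), cells))
            for cell, value in zip(cells, values):
                results[cell] = value
    return results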
This paper deals with one of the fundamental properties of grid computing – transferring code between grid nodes and executing it remotely on heterogeneous hosts. Contemporary middleware relies for this purpose on Web Services, which makes application programs complicated and low-level and requires much additional expertise from programmers. We compare two mechanisms for grid application programming with regard to their handling of code transfer – the de-facto standard WS-GRAM in Globus and the higher-level approach based on HOCs (Higher-Order Components). We study the advantages and problems of each approach using a real-world application case study – the sequence alignment problem from bioinformatics. Our experiments show the trade-off between the reduced development costs and software complexity when HOCs are used and the higher performance of the applications on the grid when using WS-GRAM.
HOC-SA
(2004)
The current efforts on programming grid applications often rely on service-oriented approaches like grid services. This work presents HOC-SA, a service architecture for Higher-Order Components, which provides the programmer with reusable and composable patterns of parallelism and is interoperable with the latest Globus toolkit implementations. We describe our implementation of HOC-SA using OGSA-DAI, a framework for integrating grids with distributed databases. We present a simple example application and report first measurements on our grid testbed.
Interoperating components, implemented in multiple programming languages, are one of the key requirements of grid computing that operates over the borders of individual hardware and software platforms. Modern grid middleware like WSRF facilitates interoperability through service-orientation but it also increases software complexity. We show that Higher-Order Components (HOCs) provide a service-oriented programming abstraction over middleware technology. By offering the pipeline skeleton from the MPI-based eSkel library as a HOC, we show how machine-oriented technologies can be made available via Web Services on grids. We bind a Java-based Web application to the HOC to demonstrate its connectivity: user-defined input can be transformed in a highly performant manner by running wavelet computations remotely on parallel machines.
Report Datenmigration
(2010)
Migrating several terabytes of data from a COBOL program running on a mainframe to an SOA architecture under Linux places special demands on tools and developers. A skillful combination of existing tools and efficient strategies avoids downtime and speeds up the data transfer.
Computational grids combine computers in the Internet for distributed data processing and are an attractive platform for the data-intensive applications of bioinformatics. We present an extensible genome processing software for the grid and evaluate its performance. Our software was able to discover previously unknown circular permutations (CP) in the ProDom database containing more than 70MB of protein data. A specific feature of our software is its design as a component: the Alignment HOC, a Higher-Order Component that makes use of the latest Globus toolkit as grid middleware. Besides genome data, the Alignment HOC accepts plugin code for processing this data as its input, and contains all the required configuration to run the component on top of Globus, thus, freeing the non-grid-expert user from dealing with grid middleware. Instead of writing data distribution procedures and configuring the middleware appropriately for every new algorithm, Alignment HOC users reuse the existing component and only write application-specific plugins. To maintain plugins persistently in a reusable manner, we built a web-accessible plugin database with a comfortable administration GUI. The flexible component-based implementation makes it easy to study CPs in other databases (e.g. UniProt/Swiss-Prot) or to use an alignment algorithm different than the standard Needleman-Wunsch. For the efficient distribution of workload, we developed a library of group communication operations for HOCs.
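As an illustration of the kind of computation such a plugin performs (not the Alignment HOC code itself), the following is a plain Needleman-Wunsch global alignment score in Python, with assumed scoring parameters:

# Hedged sketch: the standard Needleman-Wunsch global alignment score
# that an alignment plugin could compute for one pair of protein
# sequences. Match/mismatch/gap scores are assumptions.
def needleman_wunsch_score(a, b, match=1, mismatch=-1, gap=-2):
    n, m = len(a), len(b)
    # dp[i][j] = best score aligning a[:i] with b[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * gap
    for j in range(1, m + 1):
        dp[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + s,   # align a[i-1] with b[j-1]
                           dp[i - 1][j] + gap,     # gap in b
                           dp[i][j - 1] + gap)     # gap in a
    return dp[n][m]

print(needleman_wunsch_score("MKVL", "MKAL"))   # 2 with the default scoring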
Any re-design of a distributed legacy system requires a migration which involves numerous complex data replication and transformation steps. Migration procedures can become quite difficult and time-consuming, especially when the setup (i.e., the employed databases, encodings, formats, etc.) of the legacy and the target system fundamentally differ, which is often the case with finance data grown over decades. We report on experiences from a real-world project: the recent migration of a customer loyalty system from a COBOL-operated mainframe to a modern service-oriented architecture. In this context, we present our easy-to-adopt solution for running most replication steps in a high-performance manner: the QuickApply HPC software, which helps minimize the replication time and thereby the overall downtime of the migration. Business processes can be kept up and running most of the time, while pre-extracted data already pass through a variety of platforms and representations toward the target system. We combine the advantages of traditional migration approaches: transformations which require the interruption of business processes are performed with static data only, can be undone in case of a failure, and terminate quickly due to the use of parallel processing.
While high-level software components simplify the programming of grid applications and Web services increase their interoperability, developing such components and configuring the interconnecting services is a demanding task. In this paper, we consider the combination of Higher-Order Components (HOCs) with the Fractal component model and the ProActive library.
HOCs are parallel programming components, made accessible on the grid via Web services that use a special class loader enabling code mobility: executable code can be uploaded to a HOC, allowing one to customize the HOC. Fractal simplifies the composition of components and the ProActive library offers a generator for automatically creating Web services from components composed with Fractal, as long as all the parameters of these services have primitive types.
Taking the advantages of HOCs, ProActive, and Fractal together, composing HOCs using Fractal and automatically exposing them as Web services on the grid via ProActive minimizes the effort required for building complex grid systems. In this context, we solved the problem of exchanging code-carrying parameters in automatically generated Web services by integrating the HOC class-loading mechanism into the ProActive library.
Computer-based improvements of waste collection and public transport procedures are often part of smart city initiatives. When we envision an ideal bus network, it will primarily connect the most crowded bus stops. Similarly, an ideal waste collection vehicle will arrive at every container exactly at the time when it is fully loaded. Beyond doubt, this will reduce traffic and support environmentally friendly intentions like waste separation, as it will make more containers manageable. A difficulty of putting that vision into practice is that vehicles cannot always be where they are needed. Knowing the best time for arriving at a position is not sufficient for finding the optimal route. Therefore, we compare four different approaches to optimized routing, from Regensburg, Christchurch, Malaysia, and Bangalore. Our analysis shows that the best schedules result from frequently adapting field-tested routes based on sensor measurements and route-optimizing computations.
In this paper, we present a new approach to determining the estimated time of arrival (ETA) for bus routes using (Deep) Graph Convolutional Networks (DGCNs). In addition, we use the same DGCN to detect detours within a route. In our application, a classification of routes and their underlying graph structure is performed using Graph Learning. Our model leads to a fast prediction and avoids solving the vehicle routing problem (VRP) through expensive computations. Moreover, we describe how to predict travel time for all routes using the same DGCN model. This method makes it possible to avoid a more computationally intensive approximation algorithm when determining long travel times with many intermediate stops and to use our network instead for an early estimate of the quality of a route. Long travel times in our case result from the use of a call-bus system, which must distribute many passengers among several vehicles and can take them to places without a regular stop. As a case study, the rural town of Roding in Bavaria is used. Our training data for this area results from an approximation algorithm that we implemented to optimize routes and, at the same time, to generate an archive of routes of varying quality.
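The paper's DGCN is not reproduced here; as a minimal illustration of the underlying building block, the following sketch implements one graph-convolution layer H' = ReLU(D^-1/2 (A + I) D^-1/2 H W) with NumPy, applied to a toy route graph with assumed node features:

# Hedged sketch: a single graph-convolution layer, the block that such a
# model stacks to regress travel times from a route's graph structure.
import numpy as np

def gcn_layer(A, H, W):
    """A: (n, n) adjacency, H: (n, f_in) node features, W: (f_in, f_out) weights."""
    A_hat = A + np.eye(A.shape[0])                  # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))          # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)   # ReLU

# Toy route graph with 3 stops and 2 features per stop (e.g. wait time, load)
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
H = np.random.rand(3, 2)
W = np.random.rand(2, 4)
print(gcn_layer(A, H, W).shape)    # (3, 4)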
This work integrates two distinct research areas of parallel and distributed computing, (1) automatic loop parallelization, and (2) component-based Grid programming. The latter includes technologies developed within CoreGRID for simplifying Grid programming: the Grid Component Model (GCM) and Higher-Order Components (HOCs). Components support developing applications on the Grid without taking all the technical details of the particular platform type into account (network communication, heterogeneity, etc.). The GCM enables a hierarchical composition of program pieces and HOCs enable the reuse of component code in the development of new applications by specifying application-specific operations in a program via code parameters. When a programmer is provided, e. g., with a compute farm HOC, only the independent worker tasks must be described. But, once an application exhibits data or control dependences, the trivial farm is no longer sufficient. Here, the power of loop parallelization tools, like LooPo, comes into play: by embedding LooPo into a HOC, we show that these two technologies in combination facilitate the automatic transformation of a sequential loop nest with complex dependences (supplied by the user as a HOC parameter) into an ordered task graph, which can be processed on the Grid in parallel. This technique can significantly simplify GCM-based systems which combine multiple HOCs and other components. We use an equation system solver based on the successive overrelaxation method (SOR) as our motivating application example and for performance experiments.
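As a small, self-contained illustration of the motivating application (not the LooPo/HOC pipeline itself), this is the sequential SOR loop nest whose dependences a loop parallelizer would analyze; the matrix, relaxation factor, and iteration count are assumptions:

# Hedged sketch: successive over-relaxation for A x = b. Each inner
# iteration reads values already updated in the same sweep, which is
# exactly the kind of loop-carried dependence discussed above.
import numpy as np

def sor(A, b, omega=1.5, iterations=50):
    """SOR iteration for a square, diagonally dominant A."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(iterations):
        for i in range(n):
            sigma = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]   # x[j], j<i already updated
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(sor(A, b))    # approximately [0.0909, 0.6364]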
Inverse problems are at the heart of many practical problems such as image reconstruction or nondestructive testing. A characteristic feature is their instability with respect to data perturbations. To stabilize the inversion process, regularization methods must be developed and applied. In this paper, we introduce the concept of filtered diagonal frame decomposition, which extends the classical filtered SVD to the case of frames. The use of frames as generalized singular systems allows a better match to a given class of potential solutions and is also beneficial for problems where the SVD is not analytically available. We show that filtered diagonal frame decompositions yield convergent regularization methods, derive convergence rates under source conditions and prove order optimality. Our analysis applies to bounded and unbounded forward operators. As a practical application of our tools, we study filtered diagonal frame decompositions for inverting the Radon transform as an unbounded operator on L2(R2).
The characteristic feature of inverse problems is their instability with respect to data perturbations. In order to stabilize the inversion process, regularization methods have to be developed and applied. In this work we introduce and analyze the concept of the filtered diagonal frame decomposition, which extends the standard filtered singular value decomposition to the frame case. Frames as generalized singular systems allow a better adaptation to a given class of potential solutions. In this paper, we show that filtered diagonal frame decompositions yield convergent regularization methods. Moreover, we derive convergence rates under source-type conditions and prove order optimality under the assumption that the considered frame is a Riesz basis.
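The two abstracts describe the filtered diagonal frame decomposition only verbally; by analogy with the classical filtered SVD, its general shape can be sketched as follows (a sketch only; the precise frame and filter conditions are those stated in the papers):

% Hedged sketch: filtered reconstruction over a diagonal frame decomposition
% (u_\lambda, v_\lambda, \kappa_\lambda) of the forward operator, with
% \bar{u}_\lambda a dual frame of (u_\lambda) and g_\alpha a regularizing filter.
x_\alpha \;=\; \sum_{\lambda \in \Lambda} g_\alpha(\kappa_\lambda)\,
               \langle y, v_\lambda \rangle\, \bar{u}_\lambda ,
\qquad \text{e.g. the Tikhonov-type filter } g_\alpha(\kappa) = \frac{\kappa}{\kappa^2 + \alpha}.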
The project DeinHaus 4.0 Oberpfalz – TePUS (telepresence robots for the care and support of stroke patients) is a longitudinal study in a mixed-methods design investigating telepresence- and app-based services from the fields of nursing, speech therapy, and physiotherapy.
Telenursing bei Schlaganfall
(2022)
Eye tracking has proven to be a powerful tool in a variety of empirical research areas; hence, it is steadily gaining attention. Driven by the expanding frontiers of Artificial Intelligence and its potential for data analysis, eye tracking technology offers promising applications in diverse fields, from usability research to cognitive research. The education sector in particular can benefit from the increased use of eye tracking technology - both indirectly, for example by studying the differences in gaze patterns between experts and novices to identify promising strategies, and directly by using the technology itself to teach in future classrooms.
As with any empirical method, the results depend directly on the quality of the data collected. That raises the question of which parameters educators or researchers can influence to maximize the data quality of an eye tracker. This is the starting point of the present work: In an empirical study of eye tracking as an (educational) technology, we systematically examine factors that influence the data quality, such as illumination, sampling frequency, and head orientation - parameters that can be varied without much additional effort in everyday classroom or research use - using two human subjects, an artificial face, and the Tobii Pro Spectrum.
We rely on metrics derived from the raw gaze data, such as accuracy or precision, to measure data quality. From the obtained results we derive practical advice for educators and researchers, such as using the lowest sampling frequency appropriate for a given purpose. Thereby, this research fills a gap in the current understanding of eye tracker performance and, by offering best practices, enables researchers and teachers to produce data of the highest possible quality and therefore the best results when using eye trackers in laboratories or future classrooms.
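The metrics mentioned above can be made concrete with a short sketch (not the study's analysis code): accuracy as the mean offset from a known target and precision as the RMS of sample-to-sample dispersion, both computed from gaze samples given in degrees of visual angle; the sample values are assumptions.

# Hedged sketch: two common gaze data-quality metrics.
import numpy as np

def accuracy(gaze, target):
    """Mean Euclidean offset between gaze samples and the known target position."""
    return np.mean(np.linalg.norm(gaze - target, axis=1))

def precision_rms_s2s(gaze):
    """Root-mean-square of distances between successive samples (RMS-S2S)."""
    diffs = np.diff(gaze, axis=0)
    return np.sqrt(np.mean(np.sum(diffs ** 2, axis=1)))

gaze = np.array([[0.10, 0.00], [0.12, 0.05], [0.09, 0.02]])   # degrees of visual angle
target = np.array([0.0, 0.0])
print(accuracy(gaze, target), precision_rms_s2s(gaze))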
A Hybrid Solution Method for the Capacitated Vehicle Routing Problem Using a Quantum Annealer
(2019)
The Capacitated Vehicle Routing Problem (CVRP) is an NP optimization problem (NPO) that has been of great interest for decades for both science and industry. The CVRP is a variant of the vehicle routing problem characterized by capacity-constrained vehicles. The aim is to plan tours for vehicles to supply a given number of customers as efficiently as possible. The problem is the combinatorial explosion of possible solutions, which increases superexponentially with the number of customers. Classical solution methods provide good approximations to the globally optimal solution. D-Wave's quantum annealer is a machine designed to solve optimization problems. This machine uses quantum effects to speed up computation time compared to classical computers. The challenge in solving the CVRP on the quantum annealer is the particular formulation of the optimization problem: it has to be mapped onto a quadratic unconstrained binary optimization (QUBO) problem. Complex optimization problems such as the CVRP can be decomposed into smaller subproblems, which enables a sequential solution of the partitioned problem. This work presents a quantum-classical hybrid solution method for the CVRP. It clarifies whether the implementation of such a method pays off in comparison to existing classical solution methods with regard to computation time and solution quality. Several approaches to solving the CVRP are elaborated, the arising problems are discussed, and the results are evaluated in terms of solution quality and computation time.
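To make the QUBO mapping concrete, here is a hedged sketch (not the paper's implementation) of the routing core for one tiny cluster: binary variables x[v][t] mean "customer v is visited at tour position t", the exactly-one constraints become quadratic penalties, and the resulting QUBO dictionary is brute-forced here instead of being submitted to the annealer. Distances and the penalty weight are assumptions.

# Hedged sketch: a 3-customer tour as a QUBO, brute-forced classically.
from itertools import product

d = [[0, 3, 4], [3, 0, 2], [4, 2, 0]]     # toy distance matrix
n, P = 3, 20                               # customers, penalty weight

def idx(v, t):
    return v * n + t                       # flatten (customer, position)

Q = {}
def add(i, j, w):
    key = (min(i, j), max(i, j))
    Q[key] = Q.get(key, 0) + w

# travel cost between consecutive tour positions (tour wraps around)
for t in range(n):
    for u in range(n):
        for v in range(n):
            if u != v:
                add(idx(u, t), idx(v, (t + 1) % n), d[u][v])
# linear part of both exactly-one constraints (each variable occurs in two)
for v in range(n):
    for t in range(n):
        add(idx(v, t), idx(v, t), -2 * P)
# quadratic part: exactly one customer per position t
for t in range(n):
    for v in range(n):
        for u in range(v + 1, n):
            add(idx(v, t), idx(u, t), 2 * P)
# quadratic part: each customer v at exactly one position
for v in range(n):
    for t in range(n):
        for s in range(t + 1, n):
            add(idx(v, t), idx(v, s), 2 * P)

def energy(x):
    return sum(w * x[i] * x[j] for (i, j), w in Q.items())

best = min(product([0, 1], repeat=n * n), key=energy)
print(best, energy(best) + 2 * n * P)      # adding the constant 2nP recovers the tour length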