Fakultät für Informatik
The book is aimed at students of computer science or related degree programs and contains exercises with solutions from areas that are typically covered as fundamentals in the first semesters. The area of programming is excluded. The book complements the Grundkurs Informatik with exercises on selected chapters, but it can also be used in combination with other textbooks.
The book offers a comprehensive and practice-oriented introduction to the essential fundamentals and concepts of computer science. It covers the material typically taught in the first semesters of a computer science degree, deepens connections that go beyond it, and makes them understandable. The selection of topics is guided by their long-term relevance for practical application. The content is presented in a practical and up-to-date manner for students of computer science and related degree programs as well as for practitioners.
Fault and anomaly detection in district heating substations: A survey on methodology and data sets
(2023)
District heating systems are essential building blocks for affordable, low-carbon heat supply. Early detection and elimination of faults is crucial for the efficiency of these systems and necessary to achieve the low temperatures targeted for 4th generation district heating systems. Methods for fault and anomaly detection in district heating substations are currently of especially high interest, as faults in substations can be repaired quickly and inexpensively, and smart meter data are becoming widely available. In this paper, we review recent scientific publications presenting data-driven approaches for fault and anomaly detection in district heating substations with a focus on methods and data sets. Our review indicates that researchers use a wide variety of methods, mostly focusing on unsupervised anomaly detection rather than fault detection. This is due to a lack of labeled data sets, preventing the use of supervised learning methods and quantitative analysis. Together with the lack of publicly available data sets, this impedes the accurate comparison of individual methods. To overcome this impediment, increase the comparability of different methods, and foster competition, future research should focus on establishing publicly available data sets and industry-relevant metrics as benchmarks.
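As a rough illustration of the unsupervised route the survey highlights, the sketch below flags anomalous substation operating hours with an isolation forest on smart meter features. The file name and column names are invented for the example and not taken from any reviewed paper.

```python
# Minimal sketch of unsupervised anomaly detection on district heating
# substation smart meter data. File and column names are assumptions.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hourly smart meter readings of one substation (hypothetical CSV layout).
df = pd.read_csv("substation_042.csv")
X = df[["supply_temp", "return_temp", "flow_rate", "heat_power"]]

# Unsupervised detector: no fault labels required, matching the labeled-data
# scarcity the survey points out.
model = IsolationForest(contamination=0.01, random_state=0)
df["anomaly"] = model.fit_predict(X)  # -1 = anomalous hour, 1 = normal

print(df.loc[df["anomaly"] == -1, ["supply_temp", "return_temp"]].head())
```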
Mobility management is a key feature of mobile edge computing. We present an edge cloud infrastructure testbed to explore various mobility scenarios. The design objective of this testbed has been a flexible open platform based on commodity hardware that can easily be scaled with more edge devices and compute resources to perform various edge cloud experiments. As first experiments on our testbed, we have investigated the feasibility of task migration among edge devices caused by edge device overload and unpredictable user movements. We describe the migration process and present some measurements to demonstrate the feasibility.
Parameter-free Non-Intrusive Load Monitoring (NILM) algorithms are a major step toward real-world NILM scenarios. The identification of appliances is the key element in NILM. The task consists of identifying the appliance category and its current state. In this paper, we present a parameter-free appliance identification algorithm for NILM using a 2D representation of time series known as unthresholded Recurrence Plots (RP) for appliance category identification. One cycle of voltage and current (V-I trajectory) is transformed into an RP and classified using a Spatial Pyramid Pooling Convolutional Neural Network architecture. The performance of our approach is evaluated on the three public datasets COOLL, PLAID and WHITEDv1.1 and compared to previous publications. We show that, in contrast to other approaches, our architecture requires no initial parameters to be manually tuned for each specific dataset.
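The RP construction itself is compact. The following sketch shows one way to compute an unthresholded recurrence plot from a single V-I cycle; a synthetic 50 Hz cycle stands in for real measurements from COOLL, PLAID, or WHITEDv1.1.

```python
# Sketch: unthresholded recurrence plot (RP) of a V-I trajectory.
# The RP is simply the pairwise distance matrix of the trajectory points,
# kept without thresholding, so it can be fed to a CNN as an image.
import numpy as np

def unthresholded_rp(trajectory: np.ndarray) -> np.ndarray:
    """trajectory: (n_samples, 2) array of (voltage, current) points."""
    diff = trajectory[:, None, :] - trajectory[None, :, :]
    return np.linalg.norm(diff, axis=-1)  # R[i, j] = ||x_i - x_j||

# One synthetic 50 Hz mains cycle sampled at 10 kHz (200 points).
t = np.linspace(0, 1 / 50, 200, endpoint=False)
v = np.sin(2 * np.pi * 50 * t)
i = 0.5 * np.sin(2 * np.pi * 50 * t + 0.3)  # phase-shifted current
rp = unthresholded_rp(np.stack([v, i], axis=1))
print(rp.shape)  # (200, 200) image for the classifier
```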
Stuttering is a complex speech disorder identified by repetitions, prolongations of sounds, syllables or words, and blocks while speaking. Specific stuttering behaviour differs strongly, thus needing personalized therapy. Therapy sessions require a high level of concentration by the therapist. We introduce STAN, a system to aid speech therapists in stuttering therapy sessions. Such an automated feedback system can lower the cognitive load on the therapist and thereby enable a more consistent therapy as well as allowing analysis of stuttering over the span of multiple therapy sessions.
Real-world domestic electricity demand datasets are the key enabler for developing and evaluating machine learning algorithms that facilitate the analysis of demand attribution and usage behavior. Breaking down the electricity demand of domestic households is seen as the key technology for intelligent smart-grid management systems that seek an equilibrium of electricity supply and demand. For the purpose of comparable research, we publish DEDDIAG, a domestic electricity demand dataset of individual appliances in Germany. The dataset contains recordings of 15 homes over a period of up to 3.5 years, in which a total of 50 appliances have been recorded at a frequency of 1 Hz. The recorded appliances are of significance for load-shifting purposes, such as dishwashers, washing machines and refrigerators. One home also includes three-phase mains readings that can be used for disaggregation tasks. Additionally, DEDDIAG contains manual ground-truth event annotations for 14 appliances that provide precise start and stop timestamps. Such annotations have not been published for any long-term electricity dataset we are aware of.
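As a rough illustration (the actual DEDDIAG distribution format may differ; the file and column names below are assumptions), ground-truth annotations of this kind let one cut individual usage cycles out of a 1 Hz recording:

```python
# Illustrative only: slicing a 1 Hz appliance recording by start/stop
# annotations. File and column names are assumptions, not DEDDIAG's format.
import pandas as pd

readings = pd.read_csv("house_08_dishwasher.csv", parse_dates=["timestamp"])
events = pd.read_csv("house_08_dishwasher_events.csv",
                     parse_dates=["start", "stop"])

# Extract the first annotated usage cycle of the appliance.
first = events.iloc[0]
cycle = readings[(readings["timestamp"] >= first["start"]) &
                 (readings["timestamp"] <= first["stop"])]
print(f"cycle length: {len(cycle)} s, mean power: {cycle['power'].mean():.1f} W")
```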
Mobile-access edge clouds provide distributed compute capacities for low-latency applications. 5G technology will pave the way for such mobile deployment scenarios. In this paper, we propose an edge cloud infrastructure that supports low-latency video analysis combined with bandwidth reduction for a moving group of persons. As an example, we consider a mobile body camera scenario that monitors the situation in a certain area and transmits it to an operations center. Our discussion focuses on three aspects: mobility support, low-latency video processing, and bandwidth reduction. For this, we propose a mobile edge cloud infrastructure with a central cloud. In order to optimize video processing, we optimize the edge cloud device assignment of the cameras depending on their movement by reassigning cameras to other cloud devices. This requires live migration of ongoing video analysis between edge devices. Finally, we discuss the use of a mobile central cloud.
The classical results of the binomial and negative binomial probability distribution are generalized by means of homogeneous Discrete Time Markov Chains to series of stochastically independent random trials. These have not only two possible outcomes but two groups of them -- different kinds of successes and failures with occurrence probabilities depending on the outcome of the previous trial. This generalization allows a uniform view of occupation time, first passage time and recurrence time. Our results are consequently derived and presented in matrix form, the probabilities as well as the moments. They can be applied to all Discrete Time Markov Chains, especially in computer capacity planning, performability and economics.
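The matrix-form results referred to here follow the classical absorbing-chain pattern. As a sketch of that pattern in standard notation (not necessarily the paper's own), consider a DTMC whose transition matrix is partitioned over transient states and the target states reached via R:

```latex
% Standard matrix form for first-passage quantities in a DTMC; illustrates
% the flavor of the results, the paper's notation may differ.
\[
P = \begin{pmatrix} Q & R \\ 0 & I \end{pmatrix}, \qquad
\Pr[T = n] = \alpha^{\top} Q^{\,n-1} R \,\mathbf{1},
\]
% where \alpha is the initial distribution over transient states and T is the
% first passage time. The fundamental matrix collects expected occupation times:
\[
N = (I - Q)^{-1} = \sum_{n \ge 0} Q^{n}, \qquad
\mathbb{E}[T] = \alpha^{\top} N \,\mathbf{1}.
\]
```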
Number systems and binary arithmetic – message and information – coding and data compression – encryption – switching algebra, logic circuits and elements of computer hardware – computer architectures – computer networks – operating systems – procedural and object-oriented programming (C and Java) – automata theory and formal languages – computability and complexity – searching and sorting – trees and graphs – software engineering – databases – application programming on the Internet (HTML, CSS, JavaScript and PHP) – deep learning with neural networks
Generating a more detailed understanding of domestic electricity demand is a major topic for energy suppliers and householders in times of climate change.
Over the years there have been many studies on consumption feedback systems to inform householders, disaggregation algorithms for Non-Intrusive Load Monitoring (NILM), Real-Time Pricing (RTP) to promote supply-aware behavior through monetary incentives, and appliance usage prediction algorithms. While these studies are vital steps towards energy awareness, one of the most fundamental challenges has not yet been tackled: automated detection of the start and stop of usage cycles of household appliances. We argue that most research efforts in this area will benefit from a reliable segmentation method that provides accurate usage information.
We propose an SVM-based segmentation method for home appliances such as dishwashers and washing machines. The method is evaluated using manually annotated electricity measurements of five different appliances recorded over two years in multiple households.
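A minimal sketch of such a segmentation pipeline, under the assumption of sliding-window features and an RBF-kernel SVM (the abstract does not specify the exact feature set), could look as follows:

```python
# Sketch: classify sliding windows of a 1 Hz power signal as "running" vs.
# "off", then read start/stop events off the label transitions.
# Window size, features, and the synthetic data are assumptions.
import numpy as np
from sklearn.svm import SVC

def window_features(power: np.ndarray, size: int = 60) -> np.ndarray:
    wins = np.lib.stride_tricks.sliding_window_view(power, size)
    return np.column_stack([wins.mean(axis=1), wins.std(axis=1), wins.max(axis=1)])

# Synthetic trace: off, one usage cycle, off (roles of the manual annotations
# in the paper's two-year recordings are played by the labels below).
rng = np.random.default_rng(0)
power = np.concatenate([rng.normal(2, 1, 600), rng.normal(800, 50, 600),
                        rng.normal(2, 1, 600)])
labels = np.array([0] * 541 + [1] * 600 + [0] * 600)  # per-window ground truth

clf = SVC(kernel="rbf").fit(window_features(power), labels)
pred = clf.predict(window_features(power))
edges = np.flatnonzero(np.diff(pred))  # indices where the on/off state flips
print("detected start/stop window indices:", edges)
```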
Time series are sequences of values ordered by time. This kind of data can be found in many real-world settings. Classifying time series is a difficult task and an active area of research. This paper investigates the use of transfer learning in Deep Neural Networks and a 2D representation of time series known as Recurrence Plots. In order to utilize the research done in the area of image classification, where Deep Neural Networks have achieved very good results, we use a Residual Neural Network architecture known as ResNet. As preprocessing of time series is a major part of every time series classification pipeline, the proposed method simplifies this step and requires only a few parameters. For the first time, we propose a method for multi time series classification: training a single network to classify all datasets in the archive. We are among the first to evaluate the method on the latest 2018 release of the UCR archive, a well-established time series classification benchmark.
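A minimal sketch of this transfer-learning setup, assuming an ImageNet-pretrained ResNet-18 from torchvision with placeholder class count and inputs:

```python
# Sketch: fine-tune a pretrained ResNet to classify recurrence-plot images.
# Class count and the random input batch are placeholders.
import torch
import torch.nn as nn
from torchvision import models

n_classes = 10  # placeholder: number of classes in one UCR dataset
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, n_classes)  # replace the head

# A batch of recurrence plots, tiled to 3 channels for the pretrained stem.
rp = torch.rand(8, 1, 224, 224).repeat(1, 3, 1, 1)
logits = model(rp)
print(logits.shape)  # torch.Size([8, 10])
```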
An apparatus and method for analyzing availability of a system including subsystems each having at least one failure mode with a corresponding failure effect on the system are provided. The apparatus includes a degraded mode tree generation unit configured to automatically generate a degraded mode tree. The degraded mode tree includes at least one degraded mode element representing a degraded system state of the system that deviates from a normal operation state of the system based on a predetermined generic system meta model stored in a database including Failure Mode and Effects Analysis elements representing subsystems, failure modes, failure effects, and diagnostic measures. The apparatus also includes a processor configured to evaluate the generated degraded mode tree for calculation of the availability of the system.
A method for automated qualification of a safety critical system including a plurality of components is provided. A functional safety behavior of each component is represented by an associated component fault tree element. The method includes automatically performing a failure port mapping of output failure modes to input failure modes of component fault tree elements based on a predetermined generic fault type data model stored in a database.
A method for automated recertification of a safety critical system with at least one altered functionality is provided. The method includes providing a failure propagation model of the safety critical system. The method also includes updating the failure propagation model of the safety critical system according to the at least one altered functionality using inner port dependency traces between inports and outports of a failure propagation model element representing the at least one altered functionality. The method includes calculating top events of the updated failure propagation model, and comparing the calculated top events with predetermined system requirements to recertify the safety critical system.
A method for integrated model-based safety analysis includes integrating a safety analysis model into a system development model of a safety-critical system. The system development model includes model components. The safety analysis model models a failure logic separately for each of the model components. The method includes representing dependencies among the model components with a design structure matrix. The design structure matrix represents each of the model components with a row and a column and shows dependencies between model components with corresponding entries. The method also includes sequencing the design structure matrix, and identifying at least one dependency loop and loop components in the sequenced design structure matrix. The loop components are part of the at least one dependency loop.
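Identifying dependency loops in a sequenced design structure matrix amounts to finding strongly connected components in the underlying dependency graph. A small sketch with an invented four-component matrix:

```python
# Sketch: read the DSM as a directed dependency graph and report strongly
# connected components with more than one node, i.e., dependency loops.
# The component names and matrix entries are illustrative.
import networkx as nx
import numpy as np

components = ["Sensor", "Controller", "Actuator", "Monitor"]
# dsm[i][j] = 1 means component i depends on component j.
dsm = np.array([[0, 0, 0, 0],
                [1, 0, 1, 0],   # Controller depends on Sensor and Actuator
                [0, 1, 0, 0],   # Actuator depends on Controller -> loop
                [0, 1, 0, 0]])  # Monitor depends on Controller

G = nx.DiGraph()
for i, row in enumerate(dsm):
    for j, dep in enumerate(row):
        if dep:
            G.add_edge(components[i], components[j])

loops = [scc for scc in nx.strongly_connected_components(G) if len(scc) > 1]
print("dependency loops:", loops)  # [{'Controller', 'Actuator'}]
```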
Skriptum Geschäftsprozesse
(2012)
The present paper examines what benefit automated documentation of the IT infrastructure can have for the ITIL configuration management process, and whether it is possible to fully automate documentation. The analysis concludes that the documentation process can be fully automated, but that automated documentation can “only” supply information for ITIL configuration management, that is, for the CMDB.
The strong technical orientation of multimedia's evolution so far reveals a lack of theoretical foundation. Well-founded theoretical concepts are missing both in the evolution and in the application of multimedia technology. The intention of this paper is to show categories of different information representations and interaction types and their strengths in representing content. A classification of multimedia information and interaction types is given, along with an overview of the problem fields of multimedia, especially in the field of learning theory. This classification is used to derive guidelines for using and combining multimedia content in multimedia systems.
This paper is an interdisciplinary synopsis of multimedia learning systems and mass information systems, and presents an action model for the construction of these systems and the embedding of this model in an existing concept of system planning.
The first step in the paper is the delimitation and definition of the term 'multimedia' on the basis of general criteria of human-machine-human communication. The result is a taxonomy for classifying scientific research areas in this field.
The second part examines different concepts of learning theory by means of a five-dimensional grid. The outcome of this examination is two models suited to computer-based learning in general.
The third part discusses information presentation and interaction possibilities at the human-machine interface. For this purpose, a consistent grid is developed, in which psychological perception parameters and technical parameters of information presentation and interaction possibilities are examined. This results in so-called storyboarding, the description of a process adapted from theater, film, and TV for developing a multimedia script.
The fourth part describes the combination of learning-theoretical concepts with the storyboarding process for multimedia applications and their embedding in a system planning action model. The result of this last part is a phase concept for the system planning of multimedia learning systems and mass information systems.
The thesis is a guide to the planning and creation of IT systems that also incorporate films, speech, music, animations, and virtual worlds, i.e., the entire range of multimedia presentations. Such programs are mainly used in three areas: in games (e.g., racing and adventure games), in education and training (e.g., multimedia learning programs), and in electronically supported information systems (e.g., presentations, product introductions, and product catalogs, including on CD-ROM, or kiosk systems such as information terminals at trade fairs).
The thesis deals primarily with learning systems. The difficulty in using multimedia presentations such as film, music, and so on is that aspects such as direction, dramaturgy, and psychological and didactic considerations flow into the "dry" programming. The interdisciplinary perspective is therefore particularly important in the development of multimedia systems. The thesis thus approaches the problem not from a technology-heavy angle (although the technical aspects are of course not neglected), but from a user-oriented one.
The thesis is significant for three areas: first, for science (and the theory of science); second, for the education and training sector; and third, for developers of multimedia systems.
From the perspective of science, the thesis provides a clear classification of multimedia fields of work. Possible future research areas are identified, and conceptual ambiguities are exposed and resolved. Furthermore, the interdisciplinary breadth of the subject is described, providing a comprehensive presentation of the problem area. In this sense, the thesis can almost be regarded as an encyclopedia.
From the perspective of the education and training sector, this thesis serves as a summary of existing learning concepts, within which an independent, simple grid for multimedia systems is developed. Experience from several multimedia programs that the author (co-)developed and evaluated at the Institut für Wirtschaftsinformatik is incorporated. One finding is that the user's level of knowledge in the subject area plays a decisive role: learning systems for beginners must be designed differently than those for advanced learners. The thesis describes how.
From the perspective of developers of multimedia systems, the thesis is important because it works out the differences in development compared to traditional programs and offers a procedure for them. The procedure is a kind of multimedia screenplay, referred to as a storyboard. The thesis describes how the individual dramaturgical, didactic, and psychological aspects are incorporated into the storyboard and how the system can ultimately be deployed in an appropriate environment.
This contribution describes experiences with a multimedia case study used at the Department of Information Systems for training students in data processing for business purposes. The report includes a description of how the case study was integrated as a didactic element in a university course, with special emphasis being given to theoretical aspects of presentation and learning. Additionally, a description of the case study and its development rounds off the article. The experiences were gained within the framework of an explorational, empirical study whose results are presented at the end of this paper and form the basis of suggestions for how the case study could be developed further.
The use of multimedia can significantly improve the quality of case studies, especially with regard to their presentation of reality. The development of multimedia case studies poses a challenge of both a creative and a technical nature. This paper describes the various stages of the development of the case study itself as well as an action model which supports the application of didactical aims in a multimedia case study.
The result of the thesis is an interdisciplinary synopsis of the object of study "multimedia learning and mass information systems" and, building on this, the presentation of a process model for the construction of such systems and the embedding of this model in an existing system planning concept.
To this end, the thesis first delimits and defines the term multimedia on the basis of general criteria of human-machine-human communication. The result is a taxonomy for classifying scientific research areas in this field. In the second step, learning-theoretical concepts are examined using a five-dimensional grid. The result is two models that are suitable for electronically supported knowledge transfer in general. The third step deals with information presentations and interaction possibilities at the human-machine interface. For this purpose, a consistent grid is developed, on the basis of which perceptual-psychological and technical parameters of information presentations and interaction possibilities are examined. The result is the description of a process adapted from the theater, film, and television industries for creating a multimedia script, the storyboard. The fourth step describes the combination of the learning-theoretical concepts with the storyboarding process for multimedia applications and its embedding in a system planning procedure. The result is a phase concept for the system planning of multimedia learning and mass information systems.
The growing size and complexity of software in embedded systems poses new challenges for the safety assessment of embedded control systems. In industrial practice, the control software is mostly treated as a black box during the system's safety analysis. An appropriate representation of the failure propagation of the software is a pressing need in order to increase the accuracy of safety analyses. However, it also increases the effort for creating and maintaining the safety analysis models (such as fault trees) significantly. In this work, we present a method to automatically generate Component Fault Trees from Continuous Function Charts. This method aims at generating the failure propagation model of the detailed software specification. Hence, control software can be included in safety analyses without additional manual effort required to construct the safety analysis models of the software. Moreover, safety analyses created during early system specification phases can be verified by comparing them with the automatically generated ones in the detailed specification phase.
INSiDER: Incorporation of system and safety analysis models using a dedicated reference model
(2016)
In order to enable model-based, iterative design of safety-relevant systems, an efficient incorporation of safety and system engineering is a pressing need. Our approach interconnects system design and safety analysis models efficiently using a dedicated reference model. Since all information is available in a structured way, traceability between the model elements and consistency checks enable automated synchronization, guaranteeing that the information within both kinds of models remains consistent during the development life-cycle.
Safety assurance is a major challenge in the design of today's complex embedded systems and future Cyber-physical systems. Especially changes in a system's architectural design invalidate former safety analyses and require an adaptation of related safety analysis models in order to restore consistency. In this work, we present an approach for automatically generating mappings between failure ports in compositional safety analysis models. This way, automatic and system-wide safety analyses are enabled that can be easily repeated after making modifications to the system's architecture. We demonstrate the feasibility of our approach using a case study from the automotive domain.
Automating compositional safety analysis using a failure type taxonomy for component fault trees
(2016)
Safety assurance is a major challenge in the design of today’s complex embedded systems and future Cyber-physical systems. Changes in a system’s architectural design invalidate former safety analyses and require a manual adaptation of related safety analysis models in order to restore consistency. In this work, we present an approach for automating the compositional assembly of Component Fault Trees by automatically generating mappings between their input and output failure modes. Therefore, we propose a taxonomy of failure types for annotating model elements and deriving a model of the failure propagation. This way, automatic and system-wide safety analyses can be executed and easily repeated after making modifications to the system’s architecture. We demonstrate the feasibility of our approach using an example ethylene vaporization unit from an industrial domain.
Safety assurance is a major challenge in the design of modern embedded systems that has become increasingly difficult in recent years. Growing system sizes and the rise of Cyber-Physical systems confront safety engineers with large sets of configurations to be analyzed. Current approaches are usually carried out at design time and do not address the need for automated assessments in the field. With Component Fault Trees (CFTs) there exists a component-based methodology that enables an efficient modular composition of safety artifacts. The combined model is a system-level CFT that can be analyzed by means of popular Fault Tree Analysis techniques that are widely accepted in the industry. However, when composing models, their interfacing elements must be connected manually which impedes the automation of the procedure. In this work, we introduce the notion of flow types that represent a particular kind of component interaction and define a taxonomy of related failure behavior. By annotating CFTs with types, a machine-readable vocabulary is provided that allows for an automated interconnection of their interfaces. This way, the automatic composition of models according to system architecture is enabled, allowing for automated safety assessments on system-level. We demonstrate the feasibility of our approach using an example ethylene vaporization unit.
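A hypothetical sketch of the type-matched interconnection idea follows; the data model below is invented for illustration and is not the paper's actual metamodel.

```python
# Sketch: output and input failure modes are annotated with
# (flow type, failure type) tags, and ports with matching tags are
# connected automatically. All names and tags are invented.
from dataclasses import dataclass

@dataclass(frozen=True)
class FailurePort:
    component: str
    name: str
    flow_type: str      # e.g. "ethylene_flow"
    failure_type: str   # e.g. "omission", "value_too_high"

outputs = [FailurePort("Pump", "out_flow", "ethylene_flow", "omission"),
           FailurePort("Pump", "out_flow_high", "ethylene_flow", "value_too_high")]
inputs = [FailurePort("Vaporizer", "in_flow", "ethylene_flow", "omission"),
          FailurePort("Vaporizer", "in_flow_high", "ethylene_flow", "value_too_high")]

# Automatic interconnection: same flow type and same failure type match.
mappings = [(o, i) for o in outputs for i in inputs
            if (o.flow_type, o.failure_type) == (i.flow_type, i.failure_type)]
for o, i in mappings:
    print(f"{o.component}.{o.name} -> {i.component}.{i.name}")
```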
In safety analysis for safety-critical embedded systems, methods such as FMEA and fault trees (FT) are strongly established in practice. However, the current shift towards model-based development has resulted in various new safety analysis methods, such as Component Integrated Fault Trees (CFT). Industry demands to know the benefits of these new methods. To compare CFT to FT, we conducted a controlled experiment in which 18 participants from industry and academia had to apply each method to safety modeling tasks from the avionics domain.
Although the analysis of the solutions showed that the use of CFT did not yield a significantly different number of correct or incorrect solutions, the participants subjectively rated the modeling capacities of CFT significantly higher in terms of model consistency, clarity, and maintainability. The results are promising for the potential of CFT as a model-based approach.
(Background) Empirical Software Engineering (SE) strives to provide empirical evidence about the pros and cons of SE approaches. This kind of knowledge becomes relevant when the issue is whether to change from a currently employed approach to a new one or not. An informed decision is required and is particularly important in the development of safety-critical systems. For example, for the safety analysis of safety-critical embedded systems, methods such as Failure Mode and Effect Analysis (FMEA) and Fault Tree Analysis (FTA) are used. With the advent of model-based systems and software development, the question arises whether safety engineering methods should also be adopted. New technologies such as Component Integrated Fault Trees (CFT) come into play. Industry demands to know the benefits of these new methods over established ones such as Fault Trees (FT). (Methods) For the purpose of comparing CFT and FT with regard to the capabilities of the safety analysis methods (such as quality of the results) and to the participants' rating of the consistency, clarity, and maintainability of the methods, we designed a comparative study as a controlled experiment using a within-subject design. The experiment was run with seven academic staff members working towards their PhD. The study was replicated with eleven domain experts from industry. (Results) Although the analysis of the tasks' solutions showed that the use of CFT did not yield a significantly different number of correct or incorrect solutions, the participants rated the modeling capacities of CFT higher in terms of model consistency, clarity, and maintainability. (Conclusion) From this first evidence, we conclude that CFT have the potential of being beneficial for companies looking for a safety analysis approach for projects using model-based development.
Safety assurance is a major challenge in the design of complex embedded and Cyber-physical Systems. In particular, changes and adaptations during the design or run-time of an embedded system invalidate former safety analyses and require an adaptation of the system's safety analysis models. In this paper, we present a methodology for filling empty safety analysis artifacts in component fault trees using so-called inner port dependency traces to describe failure propagation. This enables an imprecise but rapid safety analysis of an entire system at early development stages or during system run-time for the automated certification of Cyber-physical Systems. We evaluate our approach using a case study from the automotive domain.
Identifying drawbacks or insufficiencies in terms of safety is important even in early development stages of safety-critical systems. In industry, development artifacts such as components or units are often reused from existing artifacts to save time and costs. When development artifacts are reused, their existing safety analysis models are an important input for an early safety assessment of the new system, since they already provide a valid model. Component fault trees support such reuse strategies through a compositional horizontal approach. But current development strategies do not only divide systems horizontally, e.g., by encapsulating different functionality into separate components and hierarchies of components, but also vertically, e.g., into software and hardware architecture layers. Current safety analysis methodologies, such as component fault trees, do not support such vertical layers. Therefore, we present here a methodology that is able to divide safety analysis models into different layers of a system's architecture. We use so-called Architecture Layer Failure Dependencies to enable component fault trees on different layers of an architecture. These dependencies are then used to generate safety evidence for the entire system across all architecture layers. A case study applies the approach to hardware and software layers.
The growing complexity of safety-critical embedded systems is leading to an increased complexity of safety analysis models. Commonly used fault tolerance mechanisms have complex failure behavior and produce overhead compared to systems without such mechanisms. The question arises whether the overhead for fault tolerance is acceptable given the increased safety of a system. Manually modeling the timing behavior is cost-intensive and error-prone. Current approaches to safety analysis and execution time analysis are not able to reflect the timing behavior of complex mechanisms under failures. In this paper, we describe an approach that combines safety analysis models with execution times to extract different execution times for different failure conditions. This provides a detailed view of the safety behavior in combination with the produced overhead and makes it possible to find and certify appropriate fault tolerance mechanisms.
Embedded real-time systems are growing in complexity, which goes far beyond simplistic closed-loop functionality. Current approaches of worst-case execution time (WCET) analysis are used to verify deadlines of such systems, especially when they are safety-critical. These approaches calculate or measure WCET as a single value that is expected to be an upper bound for a system's execution time. Overestimations are taken into account to make this upper bound a safe bound, but modern processor architectures with caches, multi-threading, and instruction pipelines often expand those overestimations for safe upper bounds into unrealistic areas. Some approaches try to overcome this problem by calculating multiple upper bounds and argue that each single upper bound will hold with a certain probability (probabilistic worst-case execution time). Even though some of them tackle the problem of obtaining reliable probabilistic values for such upper bounds, more effort is required. Therefore, we present in this paper how probabilities from safety analysis models can be combined with elements of system development models to calculate a probabilistic worst-case execution time. This approach can be applied to systems that use mechanisms belonging to the area of fault tolerance, since such mechanisms are usually quantified in safety analyses to certify the system as being highly reliable or safe.
Embedded real-time systems are growing in complexity, which goes far beyond simplistic closed-loop functionality. Current approaches for worst-case execution time (WCET) analysis are used to verify the deadlines of such systems. These approaches calculate or measure the WCET as a single value that is expected to be an upper bound for a system's execution time. Overestimations are taken into account to make this upper bound a safe bound, but modern processor architectures expand those overestimations into unrealistic areas. Therefore, we present in this paper how safety analysis model probabilities can be combined with elements of system development models to calculate a probabilistic WCET. This approach can be applied to systems that use mechanisms belonging to the area of fault tolerance, since such mechanisms are usually quantified using safety analyses to certify the system as being highly reliable or safe. A tool prototype implementing this approach is also presented, which provides reliable safe upper bounds by performing a static WCET analysis and overcomes the frequently encountered problem of dependence structures by using a fault injection approach.
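A toy sketch of the combination step, with invented numbers: each failure condition from the safety analysis contributes an occurrence probability and a condition-specific WCET, from which an exceedance curve over candidate time bounds can be read off.

```python
# Sketch of a probabilistic WCET computation. All numbers are invented;
# in practice the probabilities would come from the safety analysis and
# the per-condition WCETs from a static WCET analysis.
conditions = [
    # (probability of the condition, WCET under that condition in µs)
    (0.98, 120.0),   # fault-free path
    (0.019, 310.0),  # single fault handled by the mechanism
    (0.001, 540.0),  # worst handled fault combination
]

def exceedance(t: float) -> float:
    """Probability that the execution time bound t is exceeded."""
    return sum(p for p, wcet in conditions if wcet > t)

for bound in (100, 200, 400, 600):
    print(f"P[exec time > {bound} µs] = {exceedance(bound):.4f}")
```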
The number of embedded systems in our daily lives that are distributed, hidden, and ubiquitous continues to increase. Many of them are safety-critical. To provide additional or better functionalities, they are becoming more and more complex, which makes it difficult to guarantee safety. It is undisputed that safety must be considered before the start of development, continue until decommissioning, and is particularly important during the design of the system and software architecture. An architecture must be able to avoid, detect, or mitigate all dangerous failures to a sufficient degree. For this purpose, the architectural design must be guided and verified by safety analyses. However, state-of-the-art component-oriented or model-based architectural design approaches use different levels of abstraction to handle complexity. So, safety analyses must also be applied on different levels of abstraction, and it must be checked and guaranteed that they are consistent with each other, which is not supported by standard safety analyses. In this paper, we present a consistency check for CFTs that automatically detects commonalities and inconsistencies between fault trees of different levels of abstraction. This facilitates the application of safety analyses in top-down architectural designs and reduces effort.
The open and cooperative nature of Cyber-Physical Systems (CPS) poses new challenges in assuring dependability. The DEIS project (Dependability Engineering Innovation for automotive CPS; funded by the European Union's Horizon 2020 research and innovation programme under grant agreement No 732242, see http://www.deis-project.eu) addresses these challenges by developing technologies that form a science of dependable system integration. At the core of these technologies lies the concept of a Digital Dependability Identity (DDI) of a component or system. DDIs are modular, composable, and executable in the field, facilitating (a) efficient synthesis of component and system dependability information over the supply chain and (b) effective evaluation of this information in the field for safe and secure composition of highly distributed and autonomous CPS. The paper outlines the DDI concept and opportunities for application in four industrial use cases.
Efficient safety analyses of complex software intensive embedded systems are still a challenging task. This article illustrates how model-driven development principles can be used in safety engineering to reduce cost and effort. To this end, the article shows how well accepted safety engineering approaches can be shifted to the level of model-driven development by integrating safety models into functional development models. Namely, we illustrate how UML profiles, model transformations, and techniques for multi language development can be used to seamlessly integrate component fault trees into the UML.
This paper reports experiences with a multimedia-based case study in academic education. The case study has been used within three courses on business process engineering and was compared to a case study based solely on text. Thirteen assumptions were evaluated. The main conclusion is that the multimedia-based case study is much more practice-oriented than a text-based case study. The solutions of the students who worked with the multimedia-based case study were also of higher quality. On the other hand, the expectations students have of a multimedia-based system are hard to meet. Based on these experiences, some hints for the further development of multimedia-based case studies are formulated.
Get up and running on Microsoft SQL Server 2016 in no time with help from this thoroughly revised, practical resource. The book offers thorough coverage of SQL management and development and features full details on the newest business intelligence, reporting, and security features.
Filled with new real-world examples and hands-on exercises, Microsoft SQL Server 2016: A Beginner's Guide, Sixth Edition, starts by explaining fundamental relational database system concepts.
From there, you will learn how to write Transact-SQL statements, execute simple and complex database queries, handle system administration and security, and use the powerful analysis and BI tools. XML, spatial data, and full-text search are also covered in this step-by-step tutorial.
· Revised from the ground up to cover the latest version of SQL Server
· Ideal both as a self-study guide and a classroom textbook
· Written by a prominent professor and best-selling author
Get Started on Microsoft SQL Server 2012 in No Time
Learn to use all of the powerful features available in SQL Server 2012 quickly and easily.
Microsoft SQL Server 2012: A Beginner's Guide explains the fundamentals of each topic alongside examples and tutorials that walk you through real-world database tasks.
Install SQL Server 2012, construct high-performance databases, use powerful Transact-SQL statements, create stored procedures and triggers, and execute simple and complex database queries.
Performance tuning, Database Engine security, Business Intelligence, and XML are also covered.
Set up, configure, and maintain SQL Server 2012
Build and manage database objects using Transact-SQL statements
Create stored procedures and user-defined functions
Optimize database performance, availability, and reliability
Implement solid security using authentication, encryption, and authorization
Automate tasks using SQL Server Agent
Create reliable data backups and perform flawless system restores
Use all-new SQL Server 2012 Business Intelligence, development, and administration tools
Learn in detail the SQL Server XML technology (SQLXML)
Get Started on Microsoft SQL Server 2008 in No Time
Learn to use all of the powerful features available in SQL Server 2008 quickly and easily.
Microsoft SQL Server 2008: A Beginner's Guide explains the fundamentals of each topic alongside examples and tutorials that walk you through real-world database tasks.
Install SQL Server 2008, construct high-performance databases, use powerful Transact-SQL statements, create stored procedures and triggers, and execute simple and complex database queries. Performance tuning, Database Engine security, Business Intelligence, and XML are also covered.
Set up, configure, and maintain SQL Server 2008
Build and manage database objects using Transact-SQL statements
Create stored procedures and user-defined functions
Optimize database performance, availability, and reliability
Implement solid security using authentication, encryption, and authorization
Automate tasks using SQL Server Agent
Create reliable data backups and perform flawless system restores
Use all-new SQL Server 2008 Business Intelligence, development, and administration tools
Learn in detail the SQL Server XML technology (SQLXML)
Deploy and manage SQL Server 2005 with ease
Learn to use all the powerful features available in SQL Server 2005 from this straightforward, hands-on guide.
Set up SQL Server 2005, automate system administration tasks, execute simple and complex database queries, and use the robust analysis, business intelligence, and reporting tools. Troubleshooting, data partitioning, replication, and query optimization are also covered.
With SQL Server 2005: A Beginner's Guide, you'll be able to set up a secure, reliable, and productive data management platform in no time.
Essential Skills for Database Professionals
- Install and customize SQL Server 2005
- Create, alter, and remove database objects with Transact-SQL statements
- Use SQL Server as a native XML database system
- Tune your database system for optimal performance
- Use the new SQL Server Management Studio tool for executing and analyzing ad hoc queries
- Retrieve data from more than one source using join operations and SELECT statements
- Secure your database using two different authentication modes: Windows and mixed
- Restore databases using transaction logs and backup and recovery methods
- Streamline system administration tasks using the SQL Server Agent service tool
- Analyze and manage information stored in a data warehouse with Microsoft Analysis Services